CN116777814A - Image processing method, apparatus, computer device, storage medium, and program product


Info

Publication number
CN116777814A
CN116777814A (application number CN202210217587.6A)
Authority
CN
China
Prior art keywords
image
model
sample
recognition
image recognition
Prior art date
Legal status
Pending
Application number
CN202210217587.6A
Other languages
Chinese (zh)
Inventor
史中强
钱晨
高�豪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210217587.6A
Publication of CN116777814A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, an image processing apparatus, a computer device, a storage medium, and a program product, and relates to the field of image processing. The image processing method comprises: acquiring an image to be recognized, and determining, based on a weight prediction model, weight coefficients of at least two image recognition models for the image to be recognized, wherein the weight coefficient of each image recognition model is proportional to the recognition accuracy of that image recognition model for the image to be recognized; recognizing the image to be recognized with each of the at least two image recognition models to obtain a prediction label from each image recognition model; and determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model. Embodiments of the application can be applied to the map field; by emphasizing the prediction labels of image recognition models with higher recognition accuracy and ignoring those of image recognition models with lower recognition accuracy, the accuracy of the final recognition result is improved.

Description

Image processing method, apparatus, computer device, storage medium, and program product
Technical Field
The present application relates to the field of image processing technology, and in particular to an image processing method, an image processing apparatus, a computer device, a storage medium, and a program product.
Background
With the rapid development of information technology, a given data sample, such as an image, can often be processed by a plurality of tools, for example a plurality of different models. Because these models extract different features, they have different expressive power on different data. By selecting the best-performing model under a specific fusion strategy, an image processing result with better overall performance can be obtained.
At present, two image processing models can be fused by setting positive and negative samples and computing a confidence from the angle between the feature vectors of those samples, from which the weights of the two models are derived. For different images to be processed, however, the accuracy of the two fused models can be unstable.
Disclosure of Invention
The application provides an image processing method, an image processing apparatus, a computer device, a storage medium, and a program product, which can solve the problem in the related art that the accuracy of a fused model cannot be ensured for different images to be processed. The technical solutions are as follows:
In one aspect, there is provided an image processing method, the method including:
acquiring an image to be recognized, and determining, based on a weight prediction model, weight coefficients of at least two image recognition models for the image to be recognized, wherein the weight coefficient of each image recognition model is proportional to the recognition accuracy of that image recognition model for the image to be recognized;
recognizing the image to be recognized with each of the at least two image recognition models to obtain a prediction label from each image recognition model;
and determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model.
In one possible implementation, the weight prediction model is trained based on the following:
acquiring a plurality of sample images, each sample image being provided with a corresponding sample standard label;
for each sample image, inputting the sample image into each of the at least two image recognition models to obtain a sample prediction label from each image recognition model;
determining the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model;
and acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample images.
In one possible implementation, determining the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model includes:
matching the sample prediction label of each image recognition model against the sample standard label to obtain the matching degree of each sample prediction label;
and setting the matching degree of each sample prediction label as the recognition accuracy of the corresponding image recognition model.
In one possible implementation, acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample images includes:
determining, based on the recognition accuracy of each image recognition model, the image recognition model with the highest recognition accuracy for each sample image;
and acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image.
In one possible implementation, acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image comprises:
for each sample image, setting the model type of the image recognition model with the highest recognition accuracy for the sample image as the sample classification label;
and training an initial weight prediction model based on each sample image and the sample classification label corresponding to each sample image to obtain the weight prediction model.
In one possible implementation, determining, based on the weight prediction model, the weight coefficients of the at least two image recognition models for the image to be recognized includes:
inputting the image to be recognized into the weight prediction model to obtain the probability that the image to be recognized belongs to the classification label of each of the at least two image recognition models;
and determining the weight coefficient of each image recognition model based on the probabilities that the image to be recognized belongs to the classification labels of the at least two image recognition models.
In one possible implementation, for each image recognition model, the probability that the image to be recognized belongs to the classification label of the image recognition model is proportional to the weight coefficient of the image recognition model.
In one possible implementation, determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model includes:
determining the weighted sum of the prediction labels of the image recognition models based on the weight coefficients of the at least two image recognition models for the image to be recognized, to obtain the recognition result.
In one possible implementation, determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model includes:
determining, from the at least two image recognition models, the image recognition models whose weight coefficients are larger than a preset coefficient;
and fusing the prediction labels of the image recognition models whose weight coefficients are larger than the preset coefficient to obtain the recognition result.
In one possible implementation, determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model includes:
determining, from the at least two image recognition models, a preset number of image recognition models with the highest weight coefficients;
and fusing the prediction labels of the preset number of image recognition models with the highest weight coefficients to obtain the recognition result.
In another aspect, there is provided an image processing apparatus including:
a first determining module, configured to acquire an image to be recognized and determine, based on a weight prediction model, weight coefficients of at least two image recognition models for the image to be recognized, wherein the weight coefficient of each image recognition model is proportional to the recognition accuracy of that image recognition model for the image to be recognized;
a recognition module, configured to recognize the image to be recognized with each of the at least two image recognition models to obtain a prediction label from each image recognition model;
and a second determining module, configured to determine the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model.
In one possible implementation, the apparatus further includes a training module configured to:
acquire a plurality of sample images, each sample image being provided with a corresponding sample standard label;
for each sample image, input the sample image into each of the at least two image recognition models to obtain a sample prediction label from each image recognition model;
determine the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model;
and acquire the weight prediction model based on the recognition accuracy of each image recognition model for the sample images.
In one possible implementation, when determining the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model, the training module is specifically configured to:
match the sample prediction label of each image recognition model against the sample standard label to obtain the matching degree of each sample prediction label;
and set the matching degree of each sample prediction label as the recognition accuracy of the corresponding image recognition model.
In one possible implementation, when acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample images, the training module is specifically configured to:
determine, based on the recognition accuracy of each image recognition model, the image recognition model with the highest recognition accuracy for each sample image;
and acquire the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image.
In one possible implementation, when acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image, the training module is specifically configured to:
for each sample image, set the model type of the image recognition model with the highest recognition accuracy for the sample image as the sample classification label;
and train an initial weight prediction model based on each sample image and the sample classification label corresponding to each sample image to obtain the weight prediction model.
In one possible implementation, when determining, based on the weight prediction model, the weight coefficients of the at least two image recognition models for the image to be recognized, the first determining module is specifically configured to:
input the image to be recognized into the weight prediction model to obtain the probability that the image to be recognized belongs to the classification label of each of the at least two image recognition models;
and determine the weight coefficient of each image recognition model based on the probabilities that the image to be recognized belongs to the classification labels of the at least two image recognition models.
In one possible implementation, for each image recognition model, the probability that the image to be recognized belongs to the classification label of the image recognition model is proportional to the weight coefficient of the image recognition model.
In one possible implementation manner, the second determining module is specifically configured to, when determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction labels of each image recognition model, respectively:
determine the weighted sum of the prediction labels of the image recognition models based on the weight coefficients of the at least two image recognition models for the image to be recognized, to obtain the recognition result.
In one possible implementation, when determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model, the second determining module is specifically configured to:
determine, from the at least two image recognition models, the image recognition models whose weight coefficients are larger than a preset coefficient;
and fuse the prediction labels of the image recognition models whose weight coefficients are larger than the preset coefficient to obtain the recognition result.
In one possible implementation, when determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model, the second determining module is specifically configured to:
determine, from the at least two image recognition models, a preset number of image recognition models with the highest weight coefficients;
and fuse the prediction labels of the preset number of image recognition models with the highest weight coefficients to obtain the recognition result.
In one possible implementation, when determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model, the second determining module is specifically configured to:
determine, from the at least two image recognition models, target recognition models whose weight coefficients are larger than a preset coefficient;
and fuse the prediction labels of the target recognition models based on their determined weight coefficients to obtain the recognition result.
In another aspect, a computer device is provided, including a memory, a processor, and a computer program stored on the memory, the processor executing the computer program to implement the image processing method described above.
In another aspect, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the above-described image processing method.
In another aspect, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described image processing method.
The technical scheme provided by the application has the beneficial effects that:
The weight coefficient of each image recognition model is proportional to the recognition accuracy of that image recognition model for the image to be recognized. Because the final recognition result is determined based on the weight coefficients of the different image recognition models and the prediction label of each image recognition model, the prediction labels of models with higher recognition accuracy are emphasized and those of models with lower recognition accuracy are de-emphasized or ignored, thereby improving the accuracy of the final recognition result.
Furthermore, the initial weight prediction model is trained on each sample image together with the image recognition model that has the highest recognition accuracy for that sample image, so that the weight prediction model can output weight coefficients proportional to each image recognition model's recognition accuracy for the image to be recognized, thereby improving the accuracy of the final recognition result.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an acquisition scheme of a weight prediction model according to an embodiment of the present application;
fig. 4 is a schematic diagram of a scheme for acquiring a recognition result provided by an example of the present application;
fig. 5 is a schematic diagram of a scheme for acquiring a recognition result provided by an example of the present application;
fig. 6 is a schematic diagram of an image processing scheme provided by an example of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the present application. It should be understood that the embodiments described below with reference to the drawings are exemplary descriptions for explaining the technical solutions of the embodiments of the present application, and the technical solutions of the embodiments of the present application are not limited.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "comprises" and "comprising", when used in this specification, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" indicates at least one of the items it joins; for example, "A and/or B" may be implemented as "A", as "B", or as "A and B".
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
With the rapid development of information technology, a large number of data samples are available, and there are usually a plurality of tools (such as models) for processing the data; because different models extract different features, they have different expressive power on different data. In view of this, dynamically selecting the best-performing model in a local space under a specific fusion strategy can yield a fusion model with better overall performance. In general, as the number of individual models in an ensemble increases, the error rate of the ensemble decreases exponentially; but when the models perform too differently from one another, the result of model fusion can be less accurate than a single model.
In the related art, a classification threshold is determined for each of a plurality of classification models, and a plurality of training samples are classified by that threshold; for each classification model, the output scores of the model for the plurality of training samples are divided into a plurality of subspaces according to the probability densities of the scores, a confidence level is determined for each unit within the subspaces, representing the reliability of that unit's output score, and the classification thresholds of the plurality of classification models are fused based on a predetermined weight of each classification model and the classification threshold of each model.
This approach simply calculates a classification threshold from positive and negative sample indices; the threshold is not directly tied to the final prediction result, so the model fusion strategy is improved only indirectly through the final outcome.
In other related technologies, the feature vector of each face image in a positive sample set and a negative sample set is obtained; for each of two face recognition models, the angles between the feature vectors of the face images in the positive and negative samples are computed to obtain the confidence of the corresponding face recognition model; the confidence results of the two face recognition models are compared, and a combination of weight values for the two models is obtained from the comparison result; and according to that combination of weight values, the two face recognition models are weighted and a face recognition fusion model is determined, so that the face recognition accuracy of the fusion model is higher than that of a single face recognition model.
Here the confidence is calculated from the angles between feature vectors, so the weights are set only indirectly, the number of models is limited to two, and the accuracy of the recognition result cannot be guaranteed for different images to be recognized.
According to the present application, models with better performance are emphasized through the weight prediction model and models with worse performance are ignored, so that even when the performance gap between different models is large, a good fusion effect can be obtained and the resulting recognition result is more accurate.
Embodiments of the present application may be applied to a variety of scenarios, including but not limited to artificial intelligence. For example, the image processing method provided by the application can be applied to water body recognition: a plurality of image semantic segmentation models are trained on an existing data set, each model performs differently in different areas, and the image recognition model fusion strategy based on weight prediction can fully exploit the advantages of the plurality of image recognition models to obtain an accurate recognition result.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes computer vision, speech processing, natural language processing, machine learning/deep learning, autonomous driving, intelligent transportation, and other directions.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout the various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method provided by an embodiment of the present application. Referring to fig. 1: sample images are recognized by at least two image recognition models to obtain sample prediction labels, from which the recognition accuracy of each image recognition model for the sample images is determined; an initial weight prediction model is then trained with the recognition accuracies of the image recognition models and the sample images to obtain the weight prediction model; the weight prediction model determines the weight coefficients of the at least two image recognition models for the image to be recognized; and the final recognition result is determined from the prediction labels and the weight coefficients of the image recognition models for the image to be recognized.
It will be appreciated that fig. 1 shows the application scenario of one example and does not limit the application scenarios of the image processing method of the present application.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application. The subject of execution of the method may be a computer device. As shown in fig. 2, the method may include the steps of:
step 201, an image to be identified is obtained, and weight coefficients of at least two image identification models respectively aiming at the image to be identified are determined based on the weight prediction model.
Wherein the weight coefficient of each image recognition model is proportional to the recognition accuracy of the image recognition model for the image to be recognized.
The image recognition model may be a semantic segmentation network, for example a fully convolutional network (Fully Convolutional Network, FCN), a UNet network, a UNet++ network, or an HRNet (high-resolution network). UNet and UNet++ are fully convolutional neural networks that classify at the pixel level; their output is the class of each pixel.
Specifically, the process of the image recognition model for recognizing the image may be a process of predicting the image pixel by the semantic segmentation network.
Taking UNet++ as an example: UNet++ introduces a built-in ensemble of UNets of variable depth, which can improve segmentation performance for objects of different sizes. It redesigns the skip connections of UNet to enable flexible feature fusion in the decoder, and a pruning scheme allows the trained UNet++ to be pruned so as to accelerate inference while maintaining performance. Jointly training the UNets of multiple depths embedded in the UNet++ architecture encourages collaborative learning among the constituent UNets, which yields better performance than independently training isolated UNets of the same architecture.
Taking HRNet as an example: HRNet changes the connections between high-resolution and low-resolution representations from serial to parallel, maintains a high-resolution representation throughout the network, and introduces interactions between the high and low resolutions to improve model performance.
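To make the pixel-level prediction concrete, the following is a minimal sketch in PyTorch. The toy two-layer network merely stands in for FCN/UNet++/HRNet; the class name, channel counts, and image size are illustrative assumptions, not the patent's implementation.

```python
import torch
import torch.nn as nn

class ToySegNet(nn.Module):
    """Stand-in for a real segmentation network (FCN/UNet++/HRNet)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution head: per-pixel class logits
        self.classifier = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))  # (N, C, H, W) logits

model = ToySegNet()
image = torch.rand(1, 3, 128, 128)     # a dummy RGB image
logits = model(image)                  # per-pixel class scores
label_map = logits.argmax(dim=1)       # (1, H, W): the class of each pixel
```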
The weight prediction model may be a convolutional neural network. Its input is an image, and its output is the probability that the image is suited to being recognized by each of the at least two image recognition models, that is, one probability per image recognition model.
Take BagNet as an example of the weight prediction model. BagNet is a convolutional neural network for image classification that does not consider spatial ordering; instead, it classifies an image according to its small local features. Constraining the network to local features makes it possible to analyze directly how each part of the image affects the classification, focuses the model on the overall information of the image, and allows it to learn useful spatial information while discarding the influence of noisy data. The brief flow is as follows (see the sketch after this list):
1) Crop the input image into image blocks of a certain pixel size;
2) apply a deep network with 1×1 convolutions to each image block to obtain a class vector;
3) sum all the output class vectors spatially;
4) take the largest element of the summed class vector as the predicted class, and output the prediction probability of each class. The probability of each class is the probability that the image is suited to each of the at least two image recognition models.
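The sketch below illustrates this flow as a simplified BagNet-style classifier; it is an assumption-laden toy, not the actual BagNet architecture. A convolution with a small receptive field plays the role of the per-block feature extractor, a 1×1 convolution produces a class vector at every block location, the vectors are summed over space, and a softmax yields the probability that the image suits each image recognition model.

```python
import torch
import torch.nn as nn

class TinyBagNet(nn.Module):
    """Simplified BagNet-style classifier: decides from small local patches,
    ignoring their spatial ordering. Returns raw class logits; apply softmax
    downstream to get the per-model probabilities."""
    def __init__(self, num_models: int = 3, channels: int = 32):
        super().__init__()
        # 1) small receptive field: each location sees one local image block
        self.local = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=9, stride=8), nn.ReLU(),
        )
        # 2) 1x1 convolution: one class vector per image block
        self.head = nn.Conv2d(channels, num_models, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        class_maps = self.head(self.local(x))   # (N, M, h, w)
        return class_maps.sum(dim=(2, 3))       # 3) sum class vectors spatially

net = TinyBagNet(num_models=3)
# 4) softmax over the summed vector: probability that the image suits each model
probs = torch.softmax(net(torch.rand(1, 3, 128, 128)), dim=1)
```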
Specifically, the sample images may be recognized by the at least two image recognition models, and the recognition accuracy of each image recognition model for the sample images may be determined, so as to establish which sample images suit which image recognition models; the weight prediction model is then obtained by training on the sample images and the recognition accuracies of the different image recognition models. The process of obtaining the weight prediction model is described in further detail below.
Step 202, recognizing the image to be recognized with each of the at least two image recognition models to obtain a prediction label from each image recognition model.
Each image recognition model is obtained by training on training samples and the training labels corresponding to those samples.
Specifically, the image to be recognized is input into each of the different image recognition models to obtain each model's prediction label for the image to be recognized.
Step 203, determining the recognition result of the image to be recognized based on the weight coefficient of the at least two image recognition models and the prediction label of each image recognition model.
In some embodiments, the prediction label of the image recognition model with the largest weight coefficient may be selected based on the weight coefficient, and the recognition result may be determined.
In other embodiments, an image recognition model with a weight coefficient greater than a preset threshold value may be selected from a plurality of image recognition models based on the weight coefficients of different image recognition models, and the recognition result may be determined based on the prediction labels of the selected image recognition models.
In still other embodiments, the prediction labels of the different image recognition models may be fused based on the weight coefficients of the different image recognition models to obtain the recognition result, and the specific process of obtaining the recognition result will be described in further detail below.
In the above embodiment, the weight prediction model predicts a weight coefficient for each of the at least two image recognition models with respect to the image to be recognized, and the weight coefficient of each image recognition model is proportional to that model's recognition accuracy for the image to be recognized. Because the final recognition result is determined based on the weight coefficients of the different image recognition models and the prediction label of each model, the prediction labels of models with higher recognition accuracy are emphasized and those of models with lower recognition accuracy are de-emphasized, thereby improving the accuracy of the final recognition result.
The process of obtaining the weight prediction model will be further described in connection with the embodiments below.
In one possible implementation, as shown in fig. 3, the weight prediction model may be trained based on:
step S301, a plurality of sample images are acquired.
Wherein each sample image is provided with a corresponding sample standard label, which may be close to or ideally identical to the actual label of the sample.
Step S302, for each sample image, the sample images are respectively input into at least two image recognition models, and sample prediction labels of each image recognition model are obtained.
Wherein the image recognition model is a model that has been trained.
It will be appreciated that the sample image is input to the image recognition model here, not for the purpose of training the image recognition model, but rather to determine the recognition accuracy of the already trained image recognition model for the sample image.
Step S303, determining the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model.
Specifically, step S303, determining the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model, may include:
(1) Matching the sample prediction label of each image recognition model against the sample standard label to obtain the matching degree of each sample prediction label;
(2) Setting the matching degree of each sample prediction label as the recognition accuracy of the corresponding image recognition model.
Specifically, the matching degree between the sample prediction label and the sample standard label may be computed as a similarity, or as the degree of coincidence measured by the intersection-over-union; the specific method for determining the matching degree is not limited here.
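As a hedged illustration, the following sketch measures the matching degree of a binary segmentation label by intersection-over-union (IoU); this is one common choice, and the patent does not prescribe a particular metric.

```python
import numpy as np

def matching_degree_iou(pred_label: np.ndarray, standard_label: np.ndarray) -> float:
    """Matching degree of a sample prediction label against the sample
    standard label, measured as intersection-over-union of the two masks."""
    pred = pred_label.astype(bool)
    truth = standard_label.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:                 # both masks empty: treat as a perfect match
        return 1.0
    return float(np.logical_and(pred, truth).sum() / union)

# The matching degree is then used directly as that model's recognition accuracy.
```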
Step S304, acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample images.
Specifically, step S304, acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample images, may include:
(1) Determining, based on the recognition accuracy of each image recognition model, the image recognition model with the highest recognition accuracy for each sample image;
(2) Acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image.
In a specific implementation process, acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image may include:
a. For each sample image, setting the model type of the image recognition model with the highest recognition accuracy for the sample image as the sample classification label;
b. training an initial weight prediction model based on each sample image and the sample classification label corresponding to each sample image to obtain the weight prediction model.
In the implementation process, the initial weight prediction model is trained on each sample image and its corresponding sample classification label, so that the classification label to which the trained model assigns the highest probability corresponds, as far as possible, to the image recognition model with the highest recognition accuracy.
Specifically, a sample image is input into the weight prediction model, the probabilities of the candidate classification labels of the different image recognition models are obtained, and the classification label of the image recognition model with the highest probability is output.
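A hedged training sketch follows: the sample classification label for each sample image is the index of its best-scoring image recognition model, and the initial weight prediction model is trained as an ordinary classifier. Cross-entropy and Adam are illustrative assumptions, since the patent does not fix a loss or optimizer, and `weight_net` is assumed to output raw (pre-softmax) class logits, like the toy network sketched earlier.

```python
import torch
import torch.nn as nn

def train_weight_predictor(weight_net, sample_images, accuracies, epochs=10):
    """sample_images: (N, 3, H, W) tensor; accuracies[i][m] = recognition
    accuracy of image recognition model m on sample image i."""
    # Sample classification label = index of the most accurate model per image.
    labels = torch.tensor(accuracies).argmax(dim=1)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(weight_net.parameters(), lr=1e-3)
    for _ in range(epochs):
        logits = weight_net(sample_images)   # (N, num_models) raw scores
        loss = criterion(logits, labels)     # raise the best model's probability
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return weight_net
```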
In the above embodiment, the initial weight prediction model is trained on each sample image together with the image recognition model that has the highest recognition accuracy for that sample image, so that the weight prediction model can output weight coefficients proportional to each image recognition model's recognition accuracy for the image to be recognized, thereby improving the accuracy of the final recognition result.
The above embodiments illustrate how the weight prediction model is acquired; the determination of the weight coefficients will be further illustrated below in conjunction with the embodiments.
In one possible implementation, determining, in step S201, the weight coefficients of the at least two image recognition models for the image to be recognized based on the weight prediction model may include:
(1) Inputting the image to be recognized into the weight prediction model to obtain the probability that the image to be recognized belongs to the classification label of each of the at least two image recognition models;
(2) Determining the weight coefficient of each image recognition model based on the probabilities that the image to be recognized belongs to the classification labels of the at least two image recognition models.
Specifically, for each image recognition model, the higher the probability of its classification label, the more likely the corresponding image recognition model is to achieve a good recognition effect on the image to be recognized; that is, the probability that the image to be recognized belongs to the classification label of an image recognition model is proportional to the weight coefficient of that image recognition model.
In the implementation process, the probability that the image to be recognized belongs to the classification label of an image recognition model can be directly set as the weight coefficient of that image recognition model.
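In code, that direct assignment is a one-liner; the sketch below assumes a weight prediction model that outputs one raw logit per image recognition model, as in the earlier toy network.

```python
import torch

def predict_weights(weight_net, image: torch.Tensor) -> torch.Tensor:
    """Weight coefficient of each image recognition model := probability that
    the image to be recognized belongs to that model's classification label."""
    with torch.no_grad():
        probs = torch.softmax(weight_net(image), dim=1)  # (1, num_models)
    return probs[0]   # e.g. tensor([0.2, 0.5, 0.3]) for three models
```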
The above embodiment describes a determination process of the weight coefficient, and a specific acquisition process of the recognition result of the image will be further described below with reference to the drawings and the embodiment.
In some possible embodiments, the prediction labels of all the image recognition models may be fused to obtain the recognition result.
Specifically, step S203, determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model, may include:
determining the weighted sum of the prediction labels of the image recognition models based on the weight coefficients of the at least two image recognition models for the image to be recognized, to obtain the recognition result.
Specifically, as shown in fig. 4, the prediction labels of all the image recognition models are weighted and summed using their weight coefficients to obtain the recognition result.
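One reasonable reading of this weighted sum, sketched below under the assumption that each prediction label is a per-pixel class-probability map, is:

```python
import torch

def weighted_sum_fusion(weights, prob_maps):
    """weights: iterable of per-model weight coefficients; prob_maps: list of
    (1, C, H, W) tensors, each model's per-pixel class probabilities."""
    fused = sum(w * p for w, p in zip(weights, prob_maps))  # weighted sum
    return fused.argmax(dim=1)   # recognition result: (1, H, W) label map
```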
In some possible embodiments, the recognition result may be determined based on a preset number of image recognition models with highest weight coefficients.
Specifically, step S203, determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model, may include:
determining a preset number of image recognition models with the highest weight coefficients based on the weight coefficients of the at least two image recognition models for the image to be recognized;
and fusing the prediction labels of the preset number of image recognition models with the highest weight coefficients to obtain the recognition result.
Specifically, after the preset number of image recognition models with the highest weight coefficients are determined, their weight coefficients can be renormalized in proportion.
As shown in fig. 5, if the weight coefficients of the two image recognition models with the highest weight coefficients are 0.5 and 0.3 respectively, the weight coefficients can be proportionally adjusted to 0.625 and 0.375.
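Proportional renormalization of the selected weights is the straightforward reading of this adjustment; the short sketch below reproduces the figure's numbers.

```python
def top_k_renormalize(weights, k=2):
    """Keep the k largest weight coefficients and rescale them to sum to 1."""
    top = sorted(enumerate(weights), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(w for _, w in top)
    return {index: w / total for index, w in top}

print(top_k_renormalize([0.5, 0.3, 0.2]))  # {0: 0.625, 1: 0.375}, as in fig. 5
```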
In still other possible embodiments, the prediction labels of the image recognition models with larger weight coefficients may be fused to obtain the recognition result.
Specifically, step S203, determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model, may include:
determining, from the at least two image recognition models, the image recognition models whose weight coefficients are larger than a preset coefficient;
and fusing the prediction labels of the image recognition models whose weight coefficients are larger than the preset coefficient to obtain the recognition result.
Specifically, the image recognition models can be screened according to their weight coefficients by judging whether each weight coefficient exceeds the preset coefficient; after the models whose weight coefficients exceed the preset coefficient are determined, their weight coefficients can be renormalized in proportion. The detailed process is the same as the processing described above and is not repeated.
In order to more clearly illustrate the image processing method of the present application, the image processing method of the present application will be further described below with reference to examples.
As shown in fig. 6, in one example, the image processing method of the present application may include the steps of:
1) Acquire a plurality of sample images, namely the source data test set images shown in the figure; each sample image is provided with a corresponding sample standard label, namely the source data test set labels shown in the figure;
2) for each sample image, input the sample image into each of the at least two image recognition models to obtain a sample prediction label from each image recognition model; each image recognition model has been trained on an initial sample set and its initial annotations, namely the source data training set images and source data training set labels shown in the figure;
3) match the sample standard label against the sample prediction label of each image recognition model, namely calculate the evaluation index shown in the figure;
4) determine the recognition accuracy of each image recognition model for the sample image, represented in the figure by the model score corresponding to each image recognition model;
5) select the image recognition model with the highest model score and set its model type as the sample classification label, namely the annotation category shown in the figure;
6) train the initial weight prediction model based on each sample image and its corresponding sample classification label to obtain the weight prediction model, namely the neural network shown in the figure;
7) input the image to be recognized into the weight prediction model to obtain the probability that the image to be recognized belongs to the classification label of each of the at least two image recognition models, namely the class 1 probability, class 2 probability, ..., class n probability shown in the figure;
8) determine the weight coefficient of each image recognition model based on these probabilities;
9) recognize the image to be recognized with each of the at least two image recognition models to obtain a prediction label from each image recognition model, and determine the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model (see the sketch after this list).
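Putting steps 7) to 9) together, the inference stage might look like the following sketch. The model objects are the illustrative stand-ins from the earlier sketches (a logits-producing weight predictor and per-pixel segmentation models); none of this is mandated by the patent.

```python
import torch

def recognize(image, weight_net, recognition_models):
    """Weight-prediction-based model fusion for one image to be recognized."""
    with torch.no_grad():
        # Steps 7-8: classification-label probabilities become weight coefficients.
        weights = torch.softmax(weight_net(image), dim=1)[0]
        # Step 9: weight each model's per-pixel prediction and fuse.
        fused = sum(w * torch.softmax(m(image), dim=1)
                    for w, m in zip(weights, recognition_models))
    return fused.argmax(dim=1)   # final recognition result: per-pixel classes
```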
According to the image processing method described above, the weight prediction model predicts a weight coefficient for each of the at least two image recognition models with respect to the image to be recognized, and the weight coefficient of each image recognition model is proportional to that model's recognition accuracy for the image to be recognized. Because the final recognition result is determined based on the weight coefficients of the different image recognition models and the prediction label of each model, the prediction labels of models with higher recognition accuracy are emphasized and those of models with lower recognition accuracy are de-emphasized, thereby improving the accuracy of the final recognition result.
Furthermore, the initial weight prediction model is trained on each sample image together with the image recognition model that has the highest recognition accuracy for that sample image, so that the weight prediction model can output weight coefficients proportional to each image recognition model's recognition accuracy for the image to be recognized, thereby improving the accuracy of the final recognition result.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes:
a first determining module 701, configured to acquire an image to be recognized and determine, based on a weight prediction model, weight coefficients of at least two image recognition models for the image to be recognized, wherein the weight coefficient of each image recognition model is proportional to the recognition accuracy of that image recognition model for the image to be recognized;
a recognition module 702, configured to recognize the image to be recognized with each of the at least two image recognition models to obtain a prediction label from each image recognition model;
and a second determining module 703, configured to determine the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model.
In one possible implementation, the apparatus further includes a training module configured to:
acquire a plurality of sample images, each sample image being provided with a corresponding sample standard label;
for each sample image, input the sample image into each of the at least two image recognition models to obtain a sample prediction label from each image recognition model;
determine the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model;
and acquire the weight prediction model based on the recognition accuracy of each image recognition model for the sample images.
In one possible implementation, when determining the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model, the training module is specifically configured to:
match the sample prediction label of each image recognition model against the sample standard label to obtain the matching degree of each sample prediction label;
and set the matching degree of each sample prediction label as the recognition accuracy of the corresponding image recognition model.
In one possible implementation, when acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample images, the training module is specifically configured to:
determine, based on the recognition accuracy of each image recognition model, the image recognition model with the highest recognition accuracy for each sample image;
and acquire the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image.
In one possible implementation, when acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image, the training module is specifically configured to:
for each sample image, set the model type of the image recognition model with the highest recognition accuracy for the sample image as the sample classification label;
and train an initial weight prediction model based on each sample image and the sample classification label corresponding to each sample image to obtain the weight prediction model.
In one possible implementation, when determining, based on the weight prediction model, the weight coefficients of the at least two image recognition models for the image to be recognized, the first determining module 701 is specifically configured to:
input the image to be recognized into the weight prediction model to obtain the probability that the image to be recognized belongs to the classification label of each of the at least two image recognition models;
and determine the weight coefficient of each image recognition model based on the probabilities that the image to be recognized belongs to the classification labels of the at least two image recognition models.
In one possible implementation, for each image recognition model, the probability that the image to be recognized belongs to the classification label of the image recognition model is proportional to the weight coefficient of the image recognition model.
In one possible implementation manner, the second determining module 703 is specifically configured to, when determining the recognition result of the image to be recognized based on the weight coefficients of at least two image recognition models for the image to be recognized and the prediction labels of each image recognition model, respectively:
and determining the weighted sum of the prediction labels of each image recognition model based on the weight coefficients of at least two image recognition models respectively aiming at the images to be recognized, and obtaining a recognition result.
In one possible implementation manner, the second determining module 703 is specifically configured to, when determining the recognition result of the image to be recognized based on the weight coefficients of at least two image recognition models for the image to be recognized and the prediction labels of each image recognition model, respectively:
determining an image recognition model with a weight coefficient larger than a preset coefficient from at least two image recognition models;
and fusing the prediction labels of the image recognition models with the weight coefficients larger than the preset coefficients to obtain recognition results.
In one possible implementation, the second determining module 703 is specifically configured to, when determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model:
determining, from the at least two image recognition models, a preset number of image recognition models with the highest weight coefficients;
and fusing the prediction labels of the preset number of image recognition models with the highest weight coefficients, to obtain the recognition result.
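And a top-k variant under the same assumptions, where k plays the role of the preset number:

    import torch

    def fuse_top_k(prediction_labels, weight_coefficients, k=2):
        # Keep the preset number of models with the highest weight
        # coefficients, then fuse their prediction labels by weighted sum.
        stacked = torch.stack(list(prediction_labels))       # (num_models, num_classes)
        top_w, top_idx = torch.topk(weight_coefficients, k)  # highest-weight models
        fused = (top_w.unsqueeze(-1) * stacked[top_idx]).sum(dim=0)
        return fused.argmax().item()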
According to the image processing apparatus described above, the weight prediction model predicts, for at least two image recognition models, their respective weight coefficients for the image to be recognized, and the weight coefficient of each image recognition model is in direct proportion to that model's recognition accuracy for the image to be recognized. The final recognition result is then determined based on the weight coefficients of the different image recognition models and the prediction label of each image recognition model, so that the prediction labels of models with higher recognition accuracy are emphasized and the prediction labels of models with lower recognition accuracy are de-emphasized or ignored, thereby improving the accuracy of the final recognition result.
Furthermore, the initial weight prediction model is trained on each sample image together with the image recognition model that has the highest recognition accuracy for that sample image, so that the resulting weight prediction model can output weight coefficients in direct proportion to each image recognition model's recognition accuracy for the image to be recognized, further improving the accuracy of the final recognition result.
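Putting the sketches together, a hypothetical end-to-end use (every name below comes from the earlier sketches, and here each recognition model is assumed to return class logits for a batch) might read:

    import torch

    image = torch.randn(3, 224, 224)  # placeholder preprocessed input
    weights = predict_weight_coefficients(weight_predictor, image)
    prediction_labels = [torch.softmax(model(image.unsqueeze(0)).squeeze(0), dim=-1)
                         for model in recognition_models]
    result = fuse_weighted_sum(prediction_labels, weights)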
In another aspect, a computer device is provided, including a memory, a processor, and a computer program stored on the memory, the processor executing the computer program to implement the image processing method described above.
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in Fig. 8, the computer device includes a memory and a processor, with at least one program stored in the memory for execution by the processor; when executed by the processor, the program implements the image processing method described above, in which the weight coefficient of each image recognition model is in direct proportion to that model's recognition accuracy for the image to be recognized, and the final recognition result is determined based on the weight coefficients of the different image recognition models and the prediction label of each image recognition model, so that the prediction labels of models with higher recognition accuracy are emphasized and those of models with lower recognition accuracy are de-emphasized, thereby improving the accuracy of the final recognition result.
In an alternative embodiment, a computer device is provided. As shown in Fig. 8, the computer device 800 includes a processor 801 and a memory 803, the processor 801 being coupled to the memory 803, for example via a bus 802. Optionally, the computer device 800 may further include a transceiver 804, which may be used for data interaction between this computer device and other computer devices, such as the transmission and/or reception of data. It should be noted that, in practical applications, the number of transceivers 804 is not limited to one, and the structure of the computer device 800 does not constitute a limitation on the embodiments of the present application.
The processor 801 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure. The processor 801 may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 802 may include a path for transferring information between the aforementioned components. The bus 802 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 802 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 8, but this does not mean that there is only one bus or one type of bus.
The memory 803 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 803 is used to store the application program code (computer program) for executing the solution of the present application, and execution is controlled by the processor 801. The processor 801 is configured to execute the application program code stored in the memory 803 to implement the content shown in the foregoing method embodiments.
The computer device includes, but is not limited to: virtualized computer devices, virtual machines, servers, service clusters, user terminals, and the like.
An embodiment of the present application provides a computer-readable storage medium having a computer program stored thereon which, when run on a computer, causes the computer to perform the corresponding content of the image processing method in the foregoing method embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the image processing method described above.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least a portion of the sub-steps or stages of other steps.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wire, optical fiber cable, RF (radio frequency), or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a module does not limit the module itself; for example, the first determining module may also be described as "a module for determining a weight coefficient".
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Persons skilled in the art will appreciate that the scope of the disclosure involved herein is not limited to the specific combinations of the features described above, and also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example embodiments formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (14)

1. An image processing method, the method comprising:
acquiring an image to be recognized, and determining, based on a weight prediction model, weight coefficients of at least two image recognition models respectively for the image to be recognized;
wherein the weight coefficient of each image recognition model is in direct proportion to the recognition accuracy of the image recognition model for the image to be recognized;
recognizing the image to be recognized based on the at least two image recognition models respectively, to obtain a prediction label of each image recognition model;
and determining a recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model.
2. The image processing method according to claim 1, wherein the weight prediction model is trained based on:
acquiring a plurality of sample images, wherein each sample image has a corresponding sample standard label;
for each sample image, inputting the sample image into the at least two image recognition models respectively, to obtain a sample prediction label of each image recognition model;
determining the recognition accuracy of each image recognition model for the sample image based on the sample standard label and the sample prediction label of each image recognition model;
and acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample image.
3. The image processing method according to claim 2, wherein the determining the recognition accuracy of each of the image recognition models for the sample image based on the sample standard label and the sample prediction label of each of the image recognition models includes:
matching the sample prediction label of each image recognition model with the sample standard label to obtain a matching degree of each sample prediction label;
and setting the matching degree of each sample prediction label as the recognition accuracy of the corresponding image recognition model.
4. The image processing method according to claim 2, wherein the acquiring the weight prediction model based on the recognition accuracy of each image recognition model for the sample image includes:
determining, based on the recognition accuracy of each image recognition model, the image recognition model with the highest recognition accuracy for the sample image;
and acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for that sample image.
5. The image processing method according to claim 4, wherein the acquiring the weight prediction model based on each sample image and the image recognition model with the highest recognition accuracy for each sample image includes:
for each sample image, setting the model type of the image recognition model with the highest recognition accuracy for the sample image as a sample classification label;
and training an initial weight prediction model based on each sample image and a sample classification label corresponding to each sample image to obtain the weight prediction model.
6. The image processing method according to claim 5, wherein the determining, based on the weight prediction model, weight coefficients of at least two image recognition models for the image to be recognized, respectively, includes:
inputting the image to be recognized into the weight prediction model to obtain the probabilities that the image to be recognized respectively belongs to the classification labels of the at least two image recognition models;
and determining the weight coefficient of each image recognition model based on the probabilities that the image to be recognized respectively belongs to the classification labels of the at least two image recognition models.
7. The image processing method according to claim 6, wherein, for each of the image recognition models, the probability that the image to be recognized belongs to a classification label of the image recognition model is proportional to the weight coefficient of the image recognition model.
8. The image processing method according to claim 1, wherein the determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model includes:
determining a weighted sum of the prediction labels of the image recognition models based on the weight coefficients of the at least two image recognition models for the image to be recognized, to obtain the recognition result.
9. The image processing method according to claim 1, wherein the determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model includes:
determining, from the at least two image recognition models, the image recognition models whose weight coefficients are larger than a preset coefficient;
and fusing the prediction labels of the image recognition models whose weight coefficients are larger than the preset coefficient, to obtain the recognition result.
10. The image processing method according to claim 1, wherein the determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model includes:
determining, from the at least two image recognition models, a preset number of image recognition models with the highest weight coefficients;
and fusing the prediction labels of the preset number of image recognition models with the highest weight coefficients, to obtain the recognition result.
11. An image processing apparatus, characterized in that the apparatus comprises:
the first determining module is used for acquiring an image to be recognized and determining, based on the weight prediction model, weight coefficients of at least two image recognition models respectively for the image to be recognized; wherein the weight coefficient of each image recognition model is in direct proportion to the recognition accuracy of the image recognition model for the image to be recognized;
the recognition module is used for recognizing the image to be recognized based on the at least two image recognition models respectively, to obtain a prediction label of each image recognition model;
and the second determining module is used for determining the recognition result of the image to be recognized based on the weight coefficients of the at least two image recognition models for the image to be recognized and the prediction label of each image recognition model.
12. A computer device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the image processing method of any of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the image processing method of any one of claims 1 to 10.
14. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the image processing method of any of claims 1 to 10.
CN202210217587.6A 2022-03-07 2022-03-07 Image processing method, apparatus, computer device, storage medium, and program product Pending CN116777814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210217587.6A CN116777814A (en) 2022-03-07 2022-03-07 Image processing method, apparatus, computer device, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210217587.6A CN116777814A (en) 2022-03-07 2022-03-07 Image processing method, apparatus, computer device, storage medium, and program product

Publications (1)

Publication Number Publication Date
CN116777814A true CN116777814A (en) 2023-09-19

Family

ID=88008591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210217587.6A Pending CN116777814A (en) 2022-03-07 2022-03-07 Image processing method, apparatus, computer device, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN116777814A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314909A (en) * 2023-11-29 2023-12-29 无棣源通电子科技有限公司 Circuit board defect detection method, device, equipment and medium based on artificial intelligence
CN117314909B (en) * 2023-11-29 2024-02-09 无棣源通电子科技有限公司 Circuit board defect detection method, device, equipment and medium based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination