CN113705737B - Extensible optimal test image set generation method based on search - Google Patents


Info

Publication number
CN113705737B
Authority
CN
China
Prior art keywords
test image
image set
population
test
optimal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111260522.1A
Other languages
Chinese (zh)
Other versions
CN113705737A (en)
Inventor
徐思涵
郑佳音
王志煜
蔡祥睿
李君龙
李梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
Original Assignee
Nankai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University filed Critical Nankai University
Priority to CN202111260522.1A priority Critical patent/CN113705737B/en
Publication of CN113705737A publication Critical patent/CN113705737A/en
Application granted granted Critical
Publication of CN113705737B publication Critical patent/CN113705737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The invention discloses a search-based extensible optimal test image set generation method. A plurality of test images are randomly selected from a test pool to form a test image set, and a plurality of such test image sets form the initial current population. The coverage rate and scale of each test image set in the current population are calculated, and the optimal test image sets are selected as the population of the next iteration. The selected population of the next iteration is taken as the parent, and a crossover operator is applied to obtain a new population. Image-level and image-set-level mutation is performed on the new population to obtain mutated offspring individuals. Offspring individuals whose coverage rate is higher than that of their parent individuals, or whose coverage rate is the same but whose scale is smaller, are selected as the current population of the next iteration, and the above steps are repeated. The method can generate an optimal test image set with small scale, high coverage rate and strong extensibility.

Description

Extensible optimal test image set generation method based on search
Technical Field
The invention belongs to the field of deep learning test, and particularly relates to a search-based expandable optimal test image set generation method.
Background
As deep learning is increasingly deployed in specific applications, researchers pay more and more attention to the correctness and reliability of deep learning applications, and testing systems for deep learning have emerged accordingly. Such a system detects possible erroneous behavior of the deep learning system by generating test data and validating with a test set. For example, when testing an image classification deep learning convolutional neural network model, the test image sets of most current testing systems require manual labeling or input, which is undoubtedly time-consuming and labor-intensive. Therefore, an automated method that can generate a test image set with optimal coverage is strongly needed.
One traditional test image set generation method is based on differential testing: it automatically obtains a cross-referenced test oracle by observing the different behaviors of multiple deep learning systems with the same function. However, the requirement of functionally identical systems makes this method rather limited.
Another method is metamorphic mutation, which keeps the semantics unchanged before and after mutation by setting strict constraint conditions. However, in the absence of full formal verification, the mutants need to be manually checked to eliminate false positives, a step that still requires considerable human involvement. Therefore, a method for automatically generating a test image set with low labeling cost and high test coverage is needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a search-based expandable optimal test image set generation method, which can generate an optimal test image set with small scale, high coverage rate and strong expandability.
The invention is realized by the following technical scheme:
a method for generating an expandable optimal test image set based on search comprises the following steps:
step S1, a plurality of test images are randomly selected from the test pool to form a test image set, and then the plurality of test image sets form an initial current population;
step S2, for each test image set individual in the current population, calculating the corresponding coverage rate of each test image set under the image classification deep neural network system to be tested and the scale of each test image set, and selecting M optimal test image sets in the current population as the population of the next iteration by using a tournament selection method according to the steps S21-S26;
step S21, initializing the population of the next iteration as an empty set;
step S22, randomly selecting a preset number of test image sets (the tournament size) from the current population as contestants of the tournament selection method;
step S23, sorting the selected competitors from small to large according to the scale of the test image set to obtain a sorted competitor list;
step S24, sorting the sorted election competitor lists according to the coverage rate from large to small to obtain a secondarily sorted election competitor list;
step S25, according to the secondarily sorted competitor list obtained in step S24, selecting the test image set with the first rank in the list as a winner to enter a population of the next iteration;
step S26, repeating the step S22-the step S25 for M times to obtain M optimal test image sets as a population of the next iteration;
step S3, taking the population of the next iteration selected in the step S2 as a parent, and performing operation by using a cross operator to obtain a new population;
step S4, performing mutation operation on the test image layer and the test image set layer on the new population obtained in the step S3 to obtain a mutated offspring individual; wherein, the mutation operation of the test image set level means: for each test image set, randomly reducing one test image or randomly increasing one test image from a test pool by a first set probability; the mutation operation of the test image layer is as follows: for each test image set, selecting one test image from the test image set according to a second set probability to perform affine transformation and/or pixel transformation;
step S5, selecting the current population which is obtained in step S4 and has the coverage rate higher than that of the parent individuals or the coverage rate which is the same as that of the parent individuals but smaller in scale to form the next iteration, and then repeating the steps S2-S5; and (4) presetting an iteration termination condition according to the allocated time and space resources, judging whether the iteration termination condition is met or not in each iteration, stopping if the iteration termination condition is met, and otherwise, continuing to perform the subsequent steps.
In the above technical solution, in step S2, the coverage rate Cov(T) of a test image set individual T is calculated according to the following formula:

$$\mathrm{Cov}(T)=\frac{\sum_{n\in N}\left|\{\,S_i^{n}\mid \exists\, t\in T:\ f_n(t)\in S_i^{n}\,\}\right|}{k\times l}$$

wherein N denotes the total neuron set of the deep neural network to be tested, n denotes a neuron in N, l denotes the number of neurons in the deep neural network to be tested, and k denotes the number of intervals into which the value range of each neuron is divided; f_n(t) ∈ S_i^n indicates that the test image t covers the i-th interval of neuron n in the test image set individual T, and {S_1^n, …, S_k^n} is the set of k intervals into which the value range of neuron n is sliced.
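By way of illustration, the KMNC coverage rate defined above can be sketched in Python; the sketch assumes that the neuron activations of each test image have already been extracted from the network under test, and all identifiers (`kmnc`, `activations`, `ranges`) are illustrative, not part of the disclosure.

```python
def kmnc(activations, ranges, k):
    """Sketch of k-multisection neuron coverage.
    activations: {neuron_id: [activation value per test image]}
    ranges: {neuron_id: (low, high)} per-neuron bounds from the training set
    k: number of equal-width intervals each neuron's range is sliced into
    Returns covered_intervals / (k * number_of_neurons)."""
    covered = 0
    for n, values in activations.items():
        low, high = ranges[n]
        width = (high - low) / k
        hit = set()
        for v in values:
            if low <= v <= high:
                # map the activation to the index of the interval it falls in
                i = min(int((v - low) / width), k - 1) if width > 0 else 0
                hit.add(i)
        covered += len(hit)
    return covered / (k * len(activations))
```

With two neurons sliced into k = 5 intervals each, activations falling into three distinct intervals yield a coverage of 3/10 = 0.3.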
In the above technical solution, step S3 is performed according to the following steps:
step S31, grouping the test image sets in the next iteration population obtained in the step S2 into two groups;
step S32, for two test image sets in the same group, a value between 0 and 1 is randomly selected as a percentage threshold, the test image data before the percentage threshold in each test image set is kept unchanged, and the test image data after the percentage threshold is subjected to equal sequence interchange between the two test image sets.
In the above technical solution, the affine transformation includes: translation, scaling, flipping, rotation, and shearing; the pixel transformation transforms pixel values by adjusting contrast and brightness.
In the foregoing technical solution, step S4 further includes setting constraint conditions for mutation, where the constraint conditions include that the rotation angle in image transformation is less than 15 degrees, affine transformation is allowed at most once, and the proportion of pixels mutated each time cannot exceed 2%.
The invention also provides a computer-readable storage medium, in which a computer program is stored which, when being executed, carries out the above-mentioned method steps.
The invention has the advantages and beneficial effects that:
the invention creatively provides an extensible optimal test image set generation framework based on search, which generates a test image set aiming at an image classification neural network with the coverage rate as high as possible and the scale as small as possible by modifying a genetic algorithm and utilizing iterative operation of selection, intersection and mutation operators. Particularly, by setting the fitness giving consideration to both the coverage rate and the scale of the test image set, the invention innovatively achieves the optimal balance under two incompatible targets of high coverage rate and small scale, thereby ensuring that the test of the image classification neural network model can be completed as comprehensively as possible with the minimum labeling cost.
Drawings
Fig. 1 is a flowchart of a search-based scalable optimal test image set generation method of the present invention.
For a person skilled in the art, other relevant figures can be obtained from the above figures without inventive effort.
Detailed Description
In order to make the technical solution of the present invention better understood, the technical solution of the present invention is further described below with reference to specific examples.
Example one
Referring to fig. 1, a method for generating a search-based expandable optimal test image set includes the following steps:
step S1, a test pool P is formed by a plurality of unmarked test images, a plurality of test images are randomly selected from the test pool P to form a test image set, and a plurality of test image sets T1, T2, …, Tn constitute the initial current population.
Step S2, for each test image set individual in the current population, calculating the corresponding coverage rate of each test image set under the deep neural network system to be tested and the scale of each test image set so as to obtain the fitness of each test image set, and selecting M test image set individuals with the best fitness as the population of the next iteration; specifically, step S2 is performed as follows:
step S21, for each test image set individual T in the current population, calculating the coverage rate Cov(T) of each test image set according to the selected evaluation index; in this example, KMNC is selected as the index for measuring coverage, defined as the ratio of the number of covered intervals to the total number of intervals, and the coverage rate Cov(T) of a test image set individual T is calculated according to the following formula:

$$\mathrm{Cov}(T)=\frac{\sum_{n\in N}\left|\{\,S_i^{n}\mid \exists\, t\in T:\ f_n(t)\in S_i^{n}\,\}\right|}{k\times l}$$

wherein N denotes the total neuron set of the deep neural network to be tested, n denotes a neuron in N, l denotes the number of neurons in the deep neural network to be tested, and k denotes the number of intervals into which the value range of each neuron is divided; f_n(t) ∈ S_i^n indicates that the test image t covers the i-th interval of neuron n in the test image set individual T, and {S_1^n, …, S_k^n} is the set of k intervals into which the value range of neuron n is sliced;
step S22, calculating the number of the test images of each test image set individual T in the current population to obtain len (T);
step S23, calculating the fitness of each test image set according to the coverage index of each test image set and the number of the test images calculated in steps S21 and S22, that is, for a given test image set T, the coverage cov (T) of each test image set calculated in steps S21 and S22 and the number len (T) of the test images are merged (i.e. two values are merged into a tuple) to form a fitness tuple fitness (T) = (cov (T), len (T));
step S24, selecting a certain number of test image sets with the best fitness as the population of the next iteration by using the tournament selection algorithm; specifically, step S24 is performed according to the following steps:
step S241, initializing the population of the next iteration as an empty set;
step S242, randomly selecting a preset number of test image sets (the tournament size) from the current population as contestants of the tournament selection method;
step S243, sorting the selected contestants in ascending order of test image set scale to obtain a sorted contestant list;
step S244, sorting the sorted contestant list in descending order of coverage rate to obtain a secondarily sorted contestant list;
step S245, according to the secondarily sorted contestant list obtained in step S244, selecting the test image set ranked first in the list as the winner to enter the population of the next iteration, namely selecting the individual with the best fitness to enter the next iteration population;
and step S246, repeating the steps S242 to S245 for M times, so as to obtain M optimal test image sets as the population of the next iteration.
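The tournament selection of steps S241-S246 can be sketched in Python as follows. Because Python's sort is stable, sorting by scale ascending and then by coverage descending reproduces the two-pass ranking of steps S243-S244, so that among contestants with equal coverage the smaller set wins. All identifiers are illustrative, not from the disclosure.

```python
import random

def tournament_select(population, fitness, m, q, rng):
    """population: list of test image sets (each a list of image ids)
    fitness: maps an index in population to a (coverage, size) tuple
    m: number of winners to select; q: contestants per tournament round."""
    winners = []                                              # S241: start empty
    for _ in range(m):                                        # S246: repeat M times
        contestants = rng.sample(range(len(population)), q)   # S242
        contestants.sort(key=lambda i: fitness[i][1])         # S243: size ascending
        # S244: stable sort keeps the smaller set first on coverage ties
        contestants.sort(key=lambda i: fitness[i][0], reverse=True)
        winners.append(population[contestants[0]])            # S245: pick the winner
    return winners
```

For example, with contestants of fitness (0.5, 1), (0.5, 2) and (0.9, 1), the set with coverage 0.9 always wins; if all coverages tie, the smallest set wins.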
Step S3, taking the population of the next iteration selected in the step S2 as a parent, and performing operation by using a cross operator to obtain a new population; specifically, step S3 is performed as follows:
step S31, grouping the test image sets in the next-iteration population obtained in step S2 into pairs of consecutive individuals, obtaining (T1, T2), (T3, T4), …;
Step S32, for two test image sets in the same group, a value between 0 and 1 is randomly selected as a percentage threshold, the test image data in front of the percentage threshold in each test image set is kept unchanged, and the test image data behind the percentage threshold are exchanged between the two test image sets in an equal sequence to obtain a new population. For example, if each test image set includes 100 test image data, and 0.5 is selected as the percentage threshold (i.e., 50%), then the data of the first 50 test images of each test image set in the same group remains unchanged, and the data of the last 50 test images are interchanged in equal order between the two test image sets, e.g., the 51 st test image data in one test image set is interchanged with the 51 st test image data in the other test image set, the 52 nd test image data in one test image set is interchanged with the 52 nd test image data in the other test image set, and so on.
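The percentage-threshold crossover of steps S31-S32 can be sketched in Python as follows (an illustrative sketch; identifiers are not from the disclosure). Each parent keeps the images before its cut point, and the tails are exchanged in equal order.

```python
import random

def crossover_pair(a, b, threshold):
    """Swap the tails of two test image sets at the given percentage
    threshold (a value between 0 and 1); the heads stay unchanged."""
    ca, cb = int(len(a) * threshold), int(len(b) * threshold)
    return a[:ca] + b[cb:], b[:cb] + a[ca:]

def crossover(population, rng):
    """Apply the operator to consecutive pairs, as in step S31."""
    new = []
    for i in range(0, len(population) - 1, 2):
        t = rng.random()  # randomly chosen percentage threshold
        c1, c2 = crossover_pair(population[i], population[i + 1], t)
        new += [c1, c2]
    return new
```

With two 100-image sets and a threshold of 0.5, the first 50 images of each set stay put and images 51-100 are interchanged position by position, matching the worked example above.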
Step S4, performing variation operation on the test image level and the test image set level on the new population obtained in the previous step to obtain a variant offspring individual; the method comprises the following steps:
step S41, for each test image set in the population, randomly adding or deleting test images to realize the mutation at the test image set level; specifically: for each test image set in the population, a test image is randomly selected from the test pool and added with probability ε (e.g., with ε = 50%, there is a 50% chance that one test image is added and a 50% chance that none is); and a test image is randomly deleted from the set with probability η (e.g., with η = 30%, there is a 30% chance that one test image is deleted and a 70% chance that none is; if a test image is deleted, it is a random member of the test image set);
step S42, randomly mutating the test image of each individual in the population obtained in step S41; specifically, step S42 includes:
step S421, for each test image set in the population obtained in step S41, selecting one of its test images with probability ξ and performing step S422 on it (that is, at most one test image is selected, and possibly none);
step S422, performing image mutation on the test image selected in step S421; the common methods of image mutation are affine transformation and pixel transformation. The affine transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates and can be implemented by composing a series of atomic transformations, including: translation (Translation), scaling (Scale), flipping (Flip), rotation (Rotation), and shearing (Shear); the pixel transformation transforms pixel values by adjusting contrast and brightness.
step S423, setting appropriate constraint conditions to ensure that the test images before and after mutation remain highly similar. Suitable constraint conditions include: the rotation angle in image transformation is less than 15 degrees, affine transformation is allowed at most once, and the proportion of pixels mutated each time cannot exceed 2%. In specific implementation, appropriate constraint conditions can be formulated according to the actual situation and the optimization objective.
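The two-level mutation of step S4 under the constraints of step S423 can be sketched in Python. The flat-list image encoding and the probability names `eps`/`eta` are illustrative assumptions; for brevity the image-level operator shows only a pixel transformation obeying the 2% constraint, and affine transformations would be applied analogously under the same constraints.

```python
import random

def mutate_set(image_set, pool, eps, eta, rng):
    """Set-level mutation (step S41): with probability eps add one random
    pool image, with probability eta delete one random member."""
    s = list(image_set)
    if rng.random() < eps and pool:
        s.append(rng.choice(pool))
    if rng.random() < eta and s:
        s.pop(rng.randrange(len(s)))
    return s

def mutate_pixels(image, rng, max_ratio=0.02, delta=0.1):
    """Image-level mutation sketched as a pixel transformation (step S42):
    perturb at most max_ratio of the pixels (the 2% constraint of step
    S423), clipping to [0, 1] so the mutant stays highly similar."""
    img = list(image)
    n = int(len(img) * max_ratio)  # budget of pixels allowed to change
    for i in rng.sample(range(len(img)), n):
        img[i] = min(1.0, max(0.0, img[i] + rng.uniform(-delta, delta)))
    return img
```

On a 784-pixel MNIST-sized image, at most 15 pixels (2%) are perturbed per mutation, and the set-level operator changes the set size by at most one image per add or delete.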
Step S5, selecting the current population which is obtained in the last step and has the coverage rate higher than that of the parent individuals or the coverage rate which is the same as that of the parent individuals but smaller in scale to form the next iteration, and then repeating the steps S2-S5; and presetting a proper iteration termination condition according to the allocated time and space resources, judging whether the iteration termination condition is met or not in each iteration, stopping if the iteration termination condition is met, and otherwise, continuing to perform the subsequent steps.
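The survivor rule of step S5 can be sketched as a comparison of the fitness tuples (coverage, size) built in step S23: an offspring replaces its parent only if it covers more, or covers the same with fewer images. Identifiers are illustrative.

```python
def survives(offspring_fit, parent_fit):
    """Each fitness is a (coverage, size) tuple as built in step S23."""
    oc, ol = offspring_fit
    pc, pl = parent_fit
    return oc > pc or (oc == pc and ol < pl)

def select_survivors(offspring, parents):
    """offspring/parents: parallel lists of (individual, (coverage, size));
    returns the population of the next iteration."""
    return [o if survives(of, pf) else p
            for (o, of), (p, pf) in zip(offspring, parents)]
```

This realizes the elitist replacement of step S5: coverage can never decrease across iterations, and at equal coverage the test image set can only shrink.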
Example two
On the basis of the above embodiment, the search-based extensible optimal test image set generation method of the present invention is described and verified below with reference to specific experimental data.
Currently, common public image datasets are MNIST, CIFAR-10, and the like.
The MNIST image dataset comprises 60000 training images and 10000 test images, each a black-and-white image of a handwritten digit. For this dataset, this embodiment trains LeNet image classification neural network models; LeNet is a classical image classification deep learning convolutional neural network model whose commonly used versions include LeNet-1, LeNet-4 and LeNet-5.
The CIFAR-10 image dataset consists of 50000 training images and 10000 test images, each a color image. For this dataset, this embodiment trains two image classification deep learning convolutional neural network models, VGG-16 and ResNet-20.
To verify the effect of the search-based expandable optimal test image set generation method (the DS method), this embodiment uses 5 test indexes (NC, KMNC, NBC, SNAC, TKNC) to perform experiments on the MNIST and CIFAR-10 image datasets, verifying that an optimal trade-off can be achieved between the two conflicting objectives of high test coverage and small test image set size. Experiments were performed on 5 image classification deep neural network models in total: LeNet-1, LeNet-4 and LeNet-5 on the MNIST image dataset, and VGG-16 and ResNet-20 on the CIFAR-10 image dataset.
Moreover, comparison experiments were carried out between the DS method of the invention and three other methods, a traditional gradient-based method, a greedy algorithm (with a maximum of 80 experiments) and a k-clustering algorithm (k = 10 in the experiments), on an experimental test image set consisting of 5000 test images randomly selected from the original dataset. Each method was run for 30 rounds. For the DS method of the invention, the number of individuals in the initial current population is 80, and each individual contains 6 test image inputs randomly selected from the test pool. The probabilities of the mutation operations, including addition, deletion and mutation, are all 0.3, and the DS method of the invention and the greedy algorithm are set to the same mutation constraint conditions to control variables. The experimental data are shown in the following table:
[Table: for each model (M1-M5) and each coverage index (NC, KMNC, NBC, SNAC, TKNC), the highest coverage rate (%) and the corresponding test image set size achieved by each method (CL, GE, GA, DS); the original table is provided as an image.]
In the table:
  • M: the class of image classification deep neural network model; M1: the LeNet-1 model; M2: the LeNet-4 model; M3: the LeNet-5 model; M4: the ResNet-20 model; M5: the VGG-16 model;
  • Mtd: the kind of test image set generation method; CL: the Clustering Algorithm, i.e., the k-clustering algorithm; GE: the Greedy Algorithm; GA: the Gradient-based Algorithm; DS: the search-based expandable optimal test image set generation method of the invention;
  • NC, KMNC, NBC, SNAC, TKNC are common coverage indicators: NC (Neuron Coverage) is the percentage of neurons covered by the test image set T out of the total number of neurons; KMNC (k-Multisection Neuron Coverage) is the ratio of the number of covered intervals to the total number of intervals; NBC (Neuron Boundary Coverage) is the percentage of neurons whose values exceed the upper or lower bounds observed on the training set out of the total number of neurons; SNAC is Strong Neuron Activation Coverage; TKNC (Top-k Neuron Coverage) is the percentage of neurons that, for some image in the test image set T, rank among the top k neuron values of their layer, out of the total number of neurons.
In each cell of the table, the first value is the highest coverage rate (%) and the value in parentheses is the test image set size. For example, 28.8 (19) in the first cell means the highest coverage rate is 28.8% and the test image set size is 19.
As can be seen from the table, the DS method of the invention achieves the highest coverage in 21 of the 25 sample cases, indicating that the test image sets generated by the DS method are more diverse than those generated by the other methods. For example, the advantage of the DS method on the NC coverage index indicates that the test image set obtained by the DS method can activate neurons that are not activated by the test image sets obtained by some other methods.
Meanwhile, the DS method of the invention achieves the aim of considering both high coverage rate and minimizing the scale of the test image set in 13 cases of 25 samples. Furthermore, it can be seen that in some cases, the DS method does not obtain the smallest-scale test image set, but the coverage rate is much higher than that of the test image sets obtained by other methods.
The invention has been described in an illustrative manner, and it is to be understood that any simple variations, modifications or other equivalent changes which can be made by one skilled in the art without departing from the spirit of the invention fall within the scope of the invention.

Claims (6)

1. A method for generating an expandable optimal test image set based on search is characterized by comprising the following steps:
step S1, a plurality of test images are randomly selected from the test pool to form a test image set, and then the plurality of test image sets form an initial current population;
step S2, for each test image set individual in the current population, calculating the corresponding coverage rate of each test image set under the image classification deep neural network system to be tested and the scale of each test image set, and selecting M optimal test image sets in the current population as the population of the next iteration by using a tournament selection method according to the steps S21-S26;
step S21, initializing the population of the next iteration as an empty set;
step S22, randomly selecting a preset number of test image sets (the tournament size) from the current population as contestants of the tournament selection method;
step S23, sorting the selected competitors from small to large according to the scale of the test image set to obtain a sorted competitor list;
step S24, sorting the sorted election competitor lists according to the coverage rate from large to small to obtain a secondarily sorted election competitor list;
step S25, according to the secondarily sorted competitor list obtained in step S24, selecting the test image set with the first rank in the list as a winner to enter a population of the next iteration;
step S26, repeating step S22 to step S25 M times to obtain M winners as the optimal test image sets constituting the population of the next iteration;
step S3, taking the population of the next iteration selected in the step S2 as a parent, and performing operation by using a cross operator to obtain a new population;
step S4, performing mutation operation on the test image layer and the test image set layer on the new population obtained in the step S3 to obtain a mutated offspring individual; wherein, the variation operation of the test image set level is as follows: for each test image set, randomly reducing one test image or randomly increasing one test image from a test pool by a first set probability; the mutation operation of the test image layer is as follows: for each test image set, selecting one test image from the test image set according to a second set probability to perform affine transformation and/or pixel transformation;
step S5, selecting the current population which is obtained in step S4 and has the coverage rate higher than that of the parent individuals or the coverage rate which is the same as that of the parent individuals but smaller in scale to form the next iteration, and then repeating the steps S2-S5; and (4) presetting an iteration termination condition according to the allocated time and space resources, judging whether the iteration termination condition is met or not in each iteration, stopping if the iteration termination condition is met, and otherwise, continuing to perform the subsequent steps.
2. The search-based scalable optimal test image set generation method of claim 1, wherein: in step S2, the coverage rate Cov(T) of a test image set individual T is calculated according to the following formula:

$$\mathrm{Cov}(T)=\frac{\sum_{n\in N}\left|\{\,S_i^{n}\mid \exists\, t\in T:\ f_n(t)\in S_i^{n}\,\}\right|}{k\times l}$$

wherein N denotes the total neuron set of the image classification deep neural network system to be tested, n denotes a neuron in N, l denotes the number of neurons in the image classification deep neural network system to be tested, and k denotes the number of intervals into which the value range of each neuron is divided; f_n(t) ∈ S_i^n indicates that the test image t covers the i-th interval of neuron n in the test image set individual T, and {S_1^n, …, S_k^n} is the set of k intervals into which the value range of neuron n is sliced.
3. The search-based scalable optimal test image set generation method of claim 1, wherein: step S3 proceeds as follows:
step S31, grouping the test image sets in the next iteration population obtained in the step S2 into two groups;
step S32, for two test image sets in the same group, a value between 0 and 1 is randomly selected as a percentage threshold, the test image data before the percentage threshold in each test image set is kept unchanged, and the test image data after the percentage threshold is subjected to equal sequence interchange between the two test image sets.
4. The search-based scalable optimal test image set generation method of claim 1, wherein: the affine transformation includes: image translation, image scaling, image flipping, image rotation, and image cropping; the pixel transformation modifies pixel values by adjusting contrast and brightness.
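The pixel transformation of claim 4 can be sketched with the common linear contrast/brightness adjustment. The parameter names `alpha` (contrast gain) and `beta` (brightness bias) follow the usual `alpha * x + beta` convention and are assumptions, not patent terminology.

```python
import numpy as np

# Sketch of a pixel transformation: scale pixel values for contrast,
# shift them for brightness, and clip to the valid 8-bit range.
def adjust_contrast_brightness(image, alpha=1.2, beta=10):
    """image: uint8 ndarray; alpha scales contrast, beta shifts brightness."""
    out = image.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)
```

A pixel at 250 shifted by +10 saturates at 255 rather than wrapping around, which is why the clip precedes the cast back to uint8.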
5. The search-based scalable optimal test image set generation method of claim 1, wherein: step S4 further includes setting mutation constraint conditions, the mutation constraint conditions including that the rotation angle in an image transformation is less than 15 degrees, at most one affine transformation is applied, and the proportion of pixels changed by each mutation does not exceed 2%.
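The constraints of claim 5 amount to a simple validity predicate on a candidate mutation. The record fields below (`rotation_deg`, `num_affine_ops`, `changed_pixels`, `total_pixels`) are illustrative assumptions about how a mutation might be summarized.

```python
# Illustrative check of the mutation constraints in claim 5: rotation
# under 15 degrees, at most one affine transformation, and no more than
# 2% of the image's pixels altered.
def mutation_is_valid(rotation_deg, num_affine_ops, changed_pixels, total_pixels):
    return (abs(rotation_deg) < 15
            and num_affine_ops <= 1
            and changed_pixels / total_pixels <= 0.02)
```

A mutation violating any one of the three conditions would be rejected before it enters the population.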
6. A computer-readable storage medium, characterized in that a computer program is stored which, when executed, realizes the steps of the method according to any one of claims 1 to 5.
CN202111260522.1A 2021-10-28 2021-10-28 Extensible optimal test image set generation method based on search Active CN113705737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111260522.1A CN113705737B (en) 2021-10-28 2021-10-28 Extensible optimal test image set generation method based on search


Publications (2)

Publication Number Publication Date
CN113705737A CN113705737A (en) 2021-11-26
CN113705737B true CN113705737B (en) 2021-12-24

Family

ID=78647298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111260522.1A Active CN113705737B (en) 2021-10-28 2021-10-28 Extensible optimal test image set generation method based on search

Country Status (1)

Country Link
CN (1) CN113705737B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914527A (en) * 2014-03-28 2014-07-09 西安电子科技大学 Graphic image recognition and matching method based on genetic programming algorithms of novel coding modes
CN103942571A (en) * 2014-03-04 2014-07-23 西安电子科技大学 Graphic image sorting method based on genetic programming algorithm
CN110458213A (en) * 2019-07-29 2019-11-15 四川大学 A kind of disaggregated model robust performance appraisal procedure
CN111898689A (en) * 2020-08-05 2020-11-06 中南大学 Image classification method based on neural network architecture search
WO2020245556A1 (en) * 2019-06-05 2020-12-10 The Secretary Of State For Defence Obtaining patterns for surfaces of objects
CN113128432A (en) * 2021-04-25 2021-07-16 四川大学 Multi-task neural network architecture searching method based on evolutionary computation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Generalization ability of genetic programming based on the bootstrap method; Cao Bo et al.; Computer Engineering and Design; 2017-03-31; Vol. 38, No. 3; pp. 768-772 *

Also Published As

Publication number Publication date
CN113705737A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN110880019B (en) Method for adaptively training target domain classification model through unsupervised domain
CN110781406B (en) Social network user multi-attribute inference method based on variational automatic encoder
CN110837836A (en) Semi-supervised semantic segmentation method based on maximized confidence
CN115331088B (en) Robust learning method based on class labels with noise and imbalance
CN105975573A (en) KNN-based text classification method
CN111611486B (en) Deep learning sample labeling method based on online education big data
CN111931505A (en) Cross-language entity alignment method based on subgraph embedding
CN110929621B (en) Road extraction method based on topology information refinement
CN111931801B (en) Dynamic route network learning method based on path diversity and consistency
Wang et al. The monkeytyping solution to the youtube-8m video understanding challenge
CN113111716A (en) Remote sensing image semi-automatic labeling method and device based on deep learning
Wolters et al. Simulated annealing model search for subset selection in screening experiments
CN106156857A (en) The method and apparatus selected for mixed model
CN113705737B (en) Extensible optimal test image set generation method based on search
CN114556364A (en) Neural architecture search based on similarity operator ordering
Chen et al. Domain-generalized textured surface anomaly detection
CN115661542A (en) Small sample target detection method based on feature relation migration
CN113221964B (en) Single sample image classification method, system, computer device and storage medium
CN113128556B (en) Deep learning test case sequencing method based on mutation analysis
Zargarbashi et al. Conformal inductive graph neural networks
CN114120367A (en) Pedestrian re-identification method and system based on circle loss measurement under meta-learning framework
Nivin et al. Exploring the effects of class-specific augmentation and class coalescence on deep neural network performance using a novel road feature dataset
CN113762324A (en) Virtual object detection method, device, equipment and computer readable storage medium
CN107577681A (en) A kind of terrain analysis based on social media picture, recommend method and system
CN111369124A (en) Image aesthetic prediction method based on self-generation global features and attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Xu Sihan, Zheng Jiayin, Fan Lingling, Wang Zhiyu, Cai Xiangrui, Li Junlong, Li Mei

Inventor before: Xu Sihan, Zheng Jiayin, Wang Zhiyu, Cai Xiangrui, Li Junlong, Li Mei
