CN111612096A - Large-scale fundus image classification system training method based on Spark platform - Google Patents
- Publication number: CN111612096A (application CN202010484386.3A)
- Authority
- CN
- China
- Prior art keywords
- training
- frog
- neural network
- convolutional neural
- frogs
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G06N3/045: Combinations of networks
- G06V2201/03: Recognition of patterns in medical or anatomical images
Abstract
The invention provides a training method for a large-scale fundus image classification system based on the Spark platform, comprising the following steps: S10 setting the parameters necessary for training the distributed convolutional neural network; S20 calling the convolutional neural network algorithm program, substituting the parameters into it, and generating the initial weights for convolutional neural network training through a distributed frog leaping algorithm; S30 training the convolutional neural network with the stored standard image data, finding the optimal frog and using it as the initial weight for the next round of grouped weight training, until training of the convolutional neural network is completed; and S40 saving the trained convolutional neural network model. In this training method, the shuffled frog leaping algorithm (SFLA) generates the initial network weights and a grouping optimization strategy realizes distributed parallel training of the convolutional neural network, which effectively improves both the efficiency and the classification accuracy of convolutional neural network training on large-scale fundus images.
Description
Technical Field
The invention relates to the technical field of medical big data, in particular to a large-scale fundus image classification system training method based on a Spark platform.
Background
With the informatization of medical care, the scale and variety of medical big data are growing rapidly, with medical images and audio-visual data accounting for a large share. Among medical images, fundus photographs are a basic, non-invasive means of retinal examination, and with the rapid growth of fundus image data, new big-data processing methods are urgently needed to classify and study large-scale fundus image data.
In recent years convolutional neural networks have developed rapidly and show clear advantages in image classification. When facing large-scale fundus image data, however, training and classification with a convolutional neural network consume a great deal of time, and the results often fall short of practical requirements.
Disclosure of Invention
To solve the above problems, the invention provides a training method for a large-scale fundus image classification system based on the Spark platform, in which the shuffled frog leaping algorithm (SFLA) generates the initial network weights and a grouping optimization strategy realizes distributed parallel training of the convolutional neural network, effectively improving both the efficiency and the classification accuracy of convolutional neural network training on large-scale fundus images.
In order to achieve the above purpose, the invention adopts a technical scheme that:
A training method for a large-scale fundus image classification system based on the Spark platform comprises the following steps: S10 setting the parameters necessary for training the distributed convolutional neural network, including the number of training steps (S) and the model name (dname); S20 calling the convolutional neural network algorithm program, substituting the parameters into it, and generating the initial weights for convolutional neural network training through a distributed frog leaping algorithm; S30 training the convolutional neural network with the stored large-scale fundus image data, finding the optimal frog and using it as the initial weight for the next round of grouped weight training, until training of the convolutional neural network is completed; and S40 saving the trained convolutional neural network model. The convolutional neural network model trained through steps S10-S40 constitutes the large-scale fundus image classification system.
Further, the step S20 includes: S21 calling the convolutional neural network algorithm program and substituting the parameters into it; S22 generating m frogs by standard normal distribution, calculating the adaptive value of each frog through the Spark framework, then gathering, sorting, and dividing the population, and cyclically performing local search and re-dividing the population by shuffled frog leaping until the algorithm meets the convergence condition and yields the globally optimal frog f_q; f_q is then taken as the initial weights of the convolutional neural network.
Further, the step S22 includes the following steps: S221 initializing the fundus image dataset, defining a maximum number of training iterations L_max, and generating the frog population by standard normal distribution according to the frog number m and the population number n of the shuffled frog leaping algorithm; S222 calculating the adaptive values of the m frogs in parallel through the Spark framework: x images are randomly selected from the fundus image dataset as reference images, and each frog is then substituted into the convolutional neural network for forward propagation, computing the adaptive values in parallel, where the adaptive-value formula is as follows:
where p denotes the network output value, t the true value, s' the dimension of each group of lesion labels, b the number of retinopathy types to be detected simultaneously, and i, j, k are integer subscripts; S223 computing loss values for the frogs that have not yet been evaluated: the frogs are substituted into the convolutional neural network, the loss values of all images are computed in parallel and finally gathered and summed at the master node to obtain the adaptive values, and if an updated adaptive value is better than the one before the update, the best frog f_b is replaced accordingly; S224 sorting all frogs in ascending order of fitness and dividing them into n populations, the frog with the globally optimal fitness being f_qb; S225 performing the distributed parallel local search of the n populations through the Spark framework, each population serving as one RDD partition; within each population, the position of the frog with the worst fitness is updated by the position-update function:
D = (f_b + f_p - f_w) × Rand(0, 1),
f_new = f_w + D,
where f_p denotes an offset with the same dimension as each frog, f_pi the value of f_p in the i-th dimension, f_new the updated frog, D the leap distance, Rand() a random-number function, and exp() the exponential function with the natural constant e as its base; S226 the master node gathers all frogs and shuffles them; and S227 judging whether the shuffled frog leaping algorithm meets the convergence condition: if so, the algorithm stops, and the globally optimal frog f_qb is taken as the initial weight of the convolutional neural network and saved to a designated directory; otherwise, execution returns to step S223 and continues until the convergence condition is met and the optimal frog is found.
Further, in the step S30, fundus images matching the training step number are selected from the large-scale fundus image data and divided into two data sets; distributed computation is performed through the Spark framework, the network weights are trained and summarized per data set, and the master node then starts a distributed frog leaping optimization task to find the optimal frog, which serves as the initial weight for the next round of grouped weight training.
Further, the step S30 includes the following steps: S31 performing secondary grouping according to the input total number of running steps S, where the running steps m' of each group are computed as follows:
where w denotes the number of servers used for distributed computation and t' the number of summarization rounds; S32 taking a group as the unit, reading the weights under the designated directory of the large-scale fundus image data as initial weights and training the convolutional neural network weights, so that n' different network weights are finally obtained through independent training and summarized to the master node on the Spark platform, where n' satisfies the following formula:
S33 taking the n' networks as initial frogs, optimizing with the shuffled frog leaping algorithm to find the globally optimal frog f_qb, and saving the weight parameters of the optimal frog to the designated directory of the large-scale fundus image data, overwriting the previously saved file; S34 judging whether the number of completed grouping rounds has reached t': if not, jumping back to step S32, otherwise executing step S35; and S35 ending the shuffled frog leaping algorithm and returning the training-complete status.
Compared with the prior art, the technical scheme of the invention has the following advantages:
according to the training method of the large-scale fundus image classification system based on the Spark platform, the mixed frog leap algorithm is adopted to generate the initial network weight, the distributed parallel training of the convolutional neural network is realized through the grouping optimization strategy, and the high efficiency and the classification accuracy of large-scale fundus images during convolutional neural network training can be effectively improved. The large-scale fundus image classification system provides intelligent processing analysis of related diseases, mining and extracting of related disease rules and knowledge and the like for large-scale fundus images, provides an effective service platform for developing intelligent decision support for related diseases, provides intelligent processing analysis of related diseases and mining of the disease rules, and has important significance and value.
Drawings
The technical solution and the advantages of the present invention will be apparent from the following detailed description of the embodiments of the present invention with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a large-scale fundus image classification system training method based on a Spark platform according to an embodiment of the present invention;
FIG. 2 is a flow chart of a large scale fundus image classification system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, a training method for a large-scale fundus image classification system based on the Spark platform is provided, as shown in FIGS. 1-2, including the following steps. S10 sets the parameters necessary for training the distributed convolutional neural network, including the number of training steps (S) and the model name (dname). S20 calls the convolutional neural network algorithm program, substitutes the parameters into it, and generates the initial weights for convolutional neural network training through a distributed frog leaping algorithm. S30 trains the convolutional neural network with the stored large-scale fundus image data, finds the optimal frog, and uses it as the initial weight for the next round of grouped weight training, until training of the convolutional neural network is completed. S40 saves the trained convolutional neural network model; the convolutional neural network model trained through steps S10-S40 constitutes the large-scale fundus image classification system.
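The overall S10-S40 flow can be sketched in Python as follows. The function names (generate_initial_weights, train_pipeline) and the toy fitness are illustrative assumptions, not from the patent; the real S30 trains a convolutional network on fundus images, and the real S20 runs the full shuffled frog leaping loop.

```python
# A minimal sketch of the S10-S40 pipeline under the assumptions above.
import random

def generate_initial_weights(m=50, dim=8, seed=0):
    # S20: draw m candidate weight vectors ("frogs") from a standard
    # normal distribution and keep the fittest one.
    rng = random.Random(seed)
    frogs = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(m)]
    # Toy fitness: smaller L2 norm is better; the patent instead
    # forward-propagates reference fundus images through the CNN.
    return min(frogs, key=lambda f: sum(x * x for x in f))

def train_pipeline(steps=10000, model_name="q1"):
    weights = generate_initial_weights()                              # S20
    model = {"name": model_name, "weights": weights, "steps": steps}  # S30 (training elided)
    return model                                                      # S40: caller saves the model

model = train_pipeline()
```

The parameter values mirror Example 1 below (S = 10000, dname = q1, m = 50).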
The step S20 includes: S21 calling the convolutional neural network algorithm program and substituting the parameters into it.
S22 generates m frogs by standard normal distribution, calculates the adaptive value of each frog through the Spark framework, then gathers, sorts, and divides the population, and cyclically performs local search and re-divides the population by shuffled frog leaping until the algorithm meets the convergence condition and yields the globally optimal frog f_q; f_q is then taken as the initial weights of the convolutional neural network.
The step S22 includes the following steps. S221 initializes the fundus image dataset, defines a maximum number of training iterations L_max, and generates the frog population by standard normal distribution according to the frog number m and the population number n of the shuffled frog leaping algorithm.
S222 calculates the adaptive values of the m frogs in parallel through the Spark framework: x images are randomly selected from the fundus image dataset as reference images, and each frog is then substituted into the convolutional neural network for forward propagation, computing the adaptive values in parallel, where the adaptive-value formula is as follows:
where p denotes the network output value, t the true value, s' the dimension of each group of lesion labels, b the number of retinopathy types to be detected simultaneously, and i, j, k are integer subscripts.
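The parallel evaluation in S222-S223 can be sketched as below. The adaptive-value formula itself did not survive extraction, so a mean squared error between the network output p and the true label t is used as an assumed stand-in; forward() is a hypothetical one-layer substitute for the convolutional network, and Spark's sc.parallelize(frogs).map(...).collect() is replaced by a plain map.

```python
# S222-S223 sketch: evaluate every frog's adaptive value in parallel.
# The real system distributes this over a Spark RDD; plain map stands in.

def forward(frog, image):
    # Hypothetical one-layer stand-in for the CNN forward pass.
    return [sum(w * x for w, x in zip(frog, image))]

def adaptive_value(frog, reference_images):
    # Assumed stand-in loss: mean squared error over the x reference images.
    total = 0.0
    for image, t in reference_images:
        p = forward(frog, image)
        total += sum((pi - ti) ** 2 for pi, ti in zip(p, t))
    return total / len(reference_images)

frogs = [[0.5, -0.2], [1.0, 1.0]]        # two candidate weight vectors
refs = [([1.0, 2.0], [0.1])]             # (image features, true label) pair
scores = list(map(lambda f: adaptive_value(f, refs), frogs))  # RDD.map analogue
best_index = min(range(len(frogs)), key=lambda i: scores[i])  # master-node reduce
```

In the real pipeline each map task would forward-propagate fundus images through the CNN on an executor, and the master node would gather and sum the per-image losses as described in S223.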
S223 computes loss values for the frogs that have not yet been evaluated: the frogs are substituted into the convolutional neural network, the loss values of all images are computed in parallel and finally gathered and summed at the master node to obtain the adaptive values; if an updated adaptive value is better than the one before the update, the best frog f_b is replaced accordingly.
S224 sorts all frogs in ascending order of fitness and divides them into n populations, the frog with the globally optimal fitness being f_qb.
S225 performs the distributed parallel local search of the n populations through the Spark framework, each population serving as one RDD partition; within each population, the position of the frog with the worst fitness is updated by the position-update function:
D = (f_b + f_p - f_w) × Rand(0, 1),
f_new = f_w + D,
where f_p denotes an offset with the same dimension as each frog, f_pi the value of f_p in the i-th dimension, f_new the updated frog, D the leap distance, Rand() a random-number function, and exp() the exponential function with the natural constant e as its base.
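The position-update rule transcribes directly into code. The text does not specify whether Rand(0, 1) is drawn once per update or once per dimension, so this sketch draws it per dimension as an assumption.

```python
# S225 position-update rule: D = (f_b + f_p - f_w) * Rand(0, 1), f_new = f_w + D.
import random

def update_worst(f_b, f_p, f_w, rng=None):
    # f_b: best frog in the population, f_p: offset, f_w: worst frog.
    rng = rng or random.Random(42)
    f_new = []
    for b, p, w in zip(f_b, f_p, f_w):
        d = (b + p - w) * rng.random()   # Rand(0, 1), drawn per dimension here
        f_new.append(w + d)
    return f_new

f_new = update_worst(f_b=[1.0, 2.0], f_p=[0.0, 0.0], f_w=[0.5, 0.5])
```

With a zero offset the update moves the worst frog some random fraction of the way toward the best frog, which is the usual leap behaviour in shuffled frog leaping variants.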
S226 the master node gathers all frogs and shuffles them. And
S227 judges whether the shuffled frog leaping algorithm meets the convergence condition: if so, the algorithm stops, and the globally optimal frog f_qb is taken as the initial weight of the convolutional neural network and saved to a designated directory; otherwise, execution returns to step S223 and continues until the convergence condition is met and the optimal frog is found.
In the step S30, fundus images matching the training step number are selected from the large-scale fundus image data and divided into two data sets; distributed computation is performed through the Spark framework, the network weights are trained and summarized per data set, and the master node then starts a distributed frog leaping optimization task to find the optimal frog, which serves as the initial weight for the next round of grouped weight training. The training data sets are grouped a second time, and multiple rounds of distributed computation and summarizing frog leaping optimization are executed, which effectively improves the accuracy of the classification system.
The step S30 includes the following steps. S31 performs secondary grouping according to the input total number of running steps S, where the running steps m' of each group are computed as follows:
where w denotes the number of servers used for distributed computation and t' the number of summarization rounds.
S32 takes a group as the unit, reads the weights under the designated directory of the large-scale fundus image data as initial weights, and trains the convolutional neural network weights; n' different network weights are finally obtained through independent training and summarized to the master node on the Spark platform, where n' satisfies the following formula:
S33 takes the n' networks as initial frogs, optimizes with the shuffled frog leaping algorithm to find the globally optimal frog f_qb, and saves the weight parameters of the optimal frog to the designated directory of the large-scale fundus image data, overwriting the previously saved file.
S34 judges whether the number of completed grouping rounds has reached t': if not, execution jumps back to step S32, otherwise step S35 is executed. And
S35 ends the shuffled frog leaping algorithm and returns the training-complete status.
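The grouped training loop of steps S31-S35 can be sketched as follows; train_one_group and evaluate are illustrative stand-ins for per-worker CNN training and loss evaluation, and the frog-leap summarization of S33 is reduced to keeping the candidate with the lowest toy loss.

```python
# S31-S35 sketch: t' rounds; each round, w workers train independently from
# the best weights so far, then the master keeps the best result.

def train_one_group(weights, worker):
    # Stand-in for independent CNN training: each worker shrinks the
    # weights by a different factor.
    return [x * (1.0 - 0.1 * (worker + 1)) for x in weights]

def evaluate(weights):
    return sum(x * x for x in weights)   # stand-in loss (lower is better)

def train_rounds(initial, t_prime=5, w=2):
    best = initial
    for _ in range(t_prime):                                       # S34 loop
        candidates = [train_one_group(best, k) for k in range(w)]  # S32
        best = min(candidates, key=evaluate)                       # S33 summarize
    return best                                                    # S35 done

final = train_rounds([1.0, -1.0])
```

The round and worker counts (t' = 5, w = 2) follow Example 1 below; in the real system each round's best weights would also be written back to the designated directory, overwriting the previous file.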
Example 1
S10 the user sets the training step number S = 10000 and the model name dname = q1 for the distributed convolutional neural network training.
S21 the classification system's backend Java program receives the information passed from the front end, calls spark_sfcnn.py, and substitutes the user inputs S = 10000 and dname = q1.
S221 initializes the fundus image dataset and defines a maximum number of training iterations L_max = 10000; in the shuffled frog leaping algorithm the frog number m = 50 and the population number n = 10, and the frog population is generated by standard normal distribution.
S222 calculates the adaptive values of the 50 frogs in parallel through the Spark framework: x = 200 images are randomly selected from the fundus image dataset as reference images, and each frog is then substituted into the convolutional neural network for forward propagation, computing the adaptive values in parallel, where the adaptive-value formula is as follows:
where p denotes the network output value, t the true value, s' the dimension of each group of lesion labels, b the number of retinopathy types to be detected simultaneously, and i, j, k are integer subscripts;
S223 computes loss values for the frogs that have not yet been evaluated: the frogs are substituted into the convolutional neural network, the loss values of all images are computed in parallel and finally gathered and summed at the master node to obtain the adaptive values; if an updated adaptive value is better than the one before the update, the best frog f_b is replaced accordingly;
S224 sorts all frogs in ascending order of fitness and divides them into 10 populations, the frog with the globally optimal fitness being f_qb;
S225 performs the distributed parallel local search of the 10 populations through the Spark framework, each population serving as one RDD (Resilient Distributed Dataset) partition; within each population, the position of the frog with the worst fitness is updated by the position-update function:
D = (f_b + f_p - f_w) × Rand(0, 1),
f_new = f_w + D,
where f_p denotes an offset with the same dimension as each frog, f_pi the value of f_p in the i-th dimension, f_new the updated frog, D the leap distance, Rand() a random-number function, and exp() the exponential function with the natural constant e as its base;
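In Spark, each of the 10 populations would be one RDD partition processed with mapPartitions; the sketch below replaces that with a plain loop over populations and, as an assumption, uses a zero offset f_p and a single Rand(0, 1) draw per population.

```python
# S225 analogue: per-population local search that replaces the worst frog
# via the position-update rule. Each population corresponds to one RDD
# partition in the real system.
import random

def local_search(populations, fitness, rng=None):
    rng = rng or random.Random(0)
    for pop in populations:              # one RDD partition each
        pop.sort(key=fitness)            # ascending fitness: best frog first
        f_b, f_w = pop[0], pop[-1]
        f_p = [0.0] * len(f_w)           # offset; assumed zero in this sketch
        r = rng.random()                 # Rand(0, 1)
        pop[-1] = [w + (b + p - w) * r for b, p, w in zip(f_b, f_p, f_w)]
    return populations

pops = [[[0.9], [0.1]], [[0.8], [0.2]]]
result = local_search(pops, fitness=lambda f: f[0] * f[0])
```

After the update the worst frog in each population has leapt toward that population's best frog; S226 then gathers and reshuffles all frogs before the next iteration.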
S226 the master node gathers all frogs and shuffles them; and
S227 judges whether the shuffled frog leaping algorithm meets the convergence condition: if so, the algorithm stops, and the globally optimal frog f_qb is taken as the initial weight of the convolutional neural network and saved to a designated directory; otherwise, execution returns to step S223 and continues until the convergence condition is met and the optimal frog is found.
S31 performs secondary grouping according to the input total number of running steps S = 10000, with the total number of summarization rounds t' = 5 and w = 2, which gives m' = 200 according to the formula.
S32 takes a group as the unit, reads the weights under the designated directory of the large-scale fundus image data as initial weights, and trains the convolutional neural network weights; 10 different network weights are finally obtained through independent training and summarized to the master node on the Spark platform.
S33 takes the 10 networks as initial frogs, optimizes with the shuffled frog leaping algorithm to find the globally optimal frog f_qb, and saves the weight parameters of the optimal frog to the designated directory of the large-scale fundus image data, overwriting the previously saved file.
S34 judges whether the number of completed grouping rounds has reached t' = 5: if not, execution jumps back to S32, otherwise S35 is executed. And
S35 ends the shuffled frog leaping algorithm and returns the training-complete status.
The above description is only an exemplary embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures or equivalent processes that are transformed by the content of the present specification and the attached drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (5)
1. A large-scale fundus image classification system training method based on a Spark platform is characterized by comprising the following steps:
s10 setting parameters necessary for executing the training of the distributed convolution neural network, wherein the parameters comprise training step number (S) and model name (dname);
S20 calling the convolutional neural network algorithm program, substituting the parameters into it, and generating the initial weights for convolutional neural network training through a distributed frog leaping algorithm;
S30 training the convolutional neural network with the stored large-scale fundus image data, finding the optimal frog and using it as the initial weight for the next round of grouped weight training, until training of the convolutional neural network is completed; and
s40, storing the trained convolutional neural network model;
the convolutional neural network model trained in the steps S10-S40 is the large-scale fundus image classification system.
2. The large scale fundus image classification system training method based on Spark platform according to claim 1, wherein said step S20 includes:
s21 calling the convolutional neural network algorithm program, and substituting the parameters into the algorithm program;
S22 generating m frogs by standard normal distribution, calculating the adaptive value of each frog through the Spark framework, then gathering, sorting, and dividing the population, and cyclically performing local search and re-dividing the population by shuffled frog leaping until the algorithm meets the convergence condition and yields the globally optimal frog f_q; f_q is then taken as the initial weights of the convolutional neural network.
3. The training method of the large-scale fundus image classification system based on the Spark platform according to claim 2, wherein said step S22 comprises the steps of:
S221 initializing the fundus image dataset, defining a maximum number of training iterations L_max, and generating the frog population by standard normal distribution according to the frog number m and the population number n of the shuffled frog leaping algorithm;
S222 calculating the adaptive values of the m frogs in parallel through the Spark framework: x images are randomly selected from the fundus image dataset as reference images, and each frog is then substituted into the convolutional neural network for forward propagation, computing the adaptive values in parallel, where the adaptive-value formula is as follows:
where p denotes the network output value, t the true value, s' the dimension of each group of lesion labels, b the number of retinopathy types to be detected simultaneously, and i, j, k are integer subscripts;
S223 computing loss values for the frogs that have not yet been evaluated: the frogs are substituted into the convolutional neural network, the loss values of all images are computed in parallel and finally gathered and summed at the master node to obtain the adaptive values, and if an updated adaptive value is better than the one before the update, the best frog f_b is replaced accordingly;
S224 sorting all frogs in ascending order of fitness and dividing them into n populations, the frog with the globally optimal fitness being f_qb;
S225 performing the distributed parallel local search of the n populations through the Spark framework, each population serving as one RDD partition; within each population, the position of the frog with the worst fitness is updated by the position-update function:
D = (f_b + f_p - f_w) × Rand(0, 1),
f_new = f_w + D,
where f_p denotes an offset with the same dimension as each frog, f_pi the value of f_p in the i-th dimension, f_new the updated frog, D the leap distance, Rand() a random-number function, and exp() the exponential function with the natural constant e as its base;
S226 the master node gathers all frogs and shuffles them; and
S227 judging whether the shuffled frog leaping algorithm meets the convergence condition: if so, the algorithm stops, and the globally optimal frog f_qb is taken as the initial weight of the convolutional neural network and saved to a designated directory; otherwise, execution returns to step S223 and continues until the convergence condition is met and the optimal frog is found.
4. The training method of the large-scale fundus image classification system based on the Spark platform according to claim 1, wherein in said step S30 fundus images matching the training step number are selected from said large-scale fundus image data and divided into two data sets; distributed computation is performed through the Spark framework, the network weights are trained and summarized per data set, and the master node starts a distributed frog leaping optimization task to find the optimal frog, which serves as the initial weight for the next round of grouped weight training.
5. The training method of the large-scale fundus image classification system based on the Spark platform according to claim 4, wherein said step S30 comprises the steps of:
S31, secondary grouping is carried out according to the input total number of running steps S, the number of running steps m' of each group being given by a calculation formula in which w represents the number of servers used for distributed computation and t' represents the number of summarization rounds;
S32, taking a group as a unit, the weights under the specified directory of the large-scale fundus image data are read as initial weights and the convolutional neural network weights are trained; n' different network weights are finally obtained through respective independent training and summarized to the master node based on the Spark platform, n' satisfying a given formula;
S33, the n' networks are taken as initial frogs and optimized by the shuffled frog leaping algorithm to find the globally optimal frog f_gb; the weight parameters of the optimal frog are stored to the designated directory of the large-scale fundus image data, overwriting the file stored last time;
S34, whether the current grouping round t' satisfies the grouping condition is judged; if so, the method jumps to S32, otherwise S35 is executed; and
S35, the shuffled frog leaping algorithm operation ends and the training-completed state is returned.
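The outer grouped-training loop of steps S31 to S35 can be sketched as below. The published text omits the exact formulas for the per-group step count m' and the number of networks n', so m' = total_steps // t and n' = w are used here only as plausible stand-ins; `train_group`, `sfla_optimize`, `load_weights`, and `save_weights` are hypothetical placeholders for the per-group CNN training, the frog leaping optimization of step S33, and the checkpoint directory reads and writes.

```python
def grouped_training(total_steps, w, t, train_group, sfla_optimize,
                     load_weights, save_weights):
    """Outer loop of steps S31-S35 (sketch).

    w: number of servers; t: number of summarization rounds.
    m' = total_steps // t and n' = w are assumed stand-ins for the
    formulas omitted from the published text.
    """
    m_prime = total_steps // t                # S31: steps per group (assumed)
    for t_prime in range(t):                  # one pass per grouping round
        init = load_weights()                 # S32: read checkpointed weights
        nets = [train_group(init, m_prime) for _ in range(w)]  # n' = w networks
        best = sfla_optimize(nets)            # S33: frog leaping over the nets
        save_weights(best)                    # overwrite the previous checkpoint
    return "training complete"                # S35
```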
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010484386.3A CN111612096A (en) | 2020-06-01 | 2020-06-01 | Large-scale fundus image classification system training method based on Spark platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111612096A true CN111612096A (en) | 2020-09-01 |
Family
ID=72201056
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113255889A (en) * | 2021-05-26 | 2021-08-13 | 安徽理工大学 | Occupational pneumoconiosis multi-modal analysis method based on deep learning |
CN113971367A (en) * | 2021-08-27 | 2022-01-25 | 天津大学 | Automatic design method of convolutional neural network framework based on shuffled frog-leaping algorithm |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871995A (en) * | 2019-02-02 | 2019-06-11 | 浙江工业大学 | The quantum optimization parameter adjustment method of distributed deep learning under Spark frame |
CN110929775A (en) * | 2019-11-18 | 2020-03-27 | 南通大学 | Convolutional neural network weight optimization method for retinopathy classification |
Non-Patent Citations (2)
Title |
---|
ZHANG QIANG et al.: "Adaptive grouping chaotic cloud model shuffled frog leaping algorithm for solving continuous space optimization problems", Control and Decision *
WANG YU et al.: "Research on bearing fault diagnosis based on a shuffled-frog-leaping-optimized neural network", Mechanical Transmission *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113407759B (en) | Multi-modal entity alignment method based on adaptive feature fusion | |
Liu et al. | Genetic evaluation of fertility traits of dairy cattle using a multiple-trait animal model | |
Nakhleh et al. | Towards the development of computational tools for evaluating phylogenetic network reconstruction methods | |
CN107018184A (en) | Distributed deep neural network cluster packet synchronization optimization method and system | |
CN113656596B (en) | Multi-modal entity alignment method based on triple screening fusion | |
EP3646252A1 (en) | Selective training for decorrelation of errors | |
CN105260171B (en) | A kind of generation method and device of virtual item | |
CN111612096A (en) | Large-scale fundus image classification system training method based on Spark platform | |
TW201835789A (en) | Method and device for constructing scoring model and evaluating user credit | |
CN111967971A (en) | Bank client data processing method and device | |
US8170963B2 (en) | Apparatus and method for processing information, recording medium and computer program | |
Soto et al. | Using autonomous search for generating good enumeration strategy blends in constraint programming | |
US20200292340A1 (en) | Robot running path, generation method, computing device and storage medium | |
US20210312295A1 (en) | Information processing method, information processing device, and information processing program | |
US20120173468A1 (en) | Medical data prediction method using genetic algorithms | |
CN113015219B (en) | Network resource selection method and device based on strategy gradient and storage medium | |
CN110825522A (en) | Spark parameter self-adaptive optimization method and system | |
CN111582450A (en) | Neural network model training method based on parameter evaluation and related device | |
US7991617B2 (en) | Optimum design management apparatus from response surface calculation and method thereof | |
CN111814965A (en) | Hyper-parameter adjusting method, device, equipment and storage medium | |
CN108399105A (en) | A kind of Method for HW/SW partitioning based on improvement brainstorming algorithm | |
CN108415773B (en) | Efficient software and hardware partitioning method based on fusion algorithm | |
JP5687122B2 (en) | Software evaluation device, software evaluation method, and system evaluation device | |
CN110879778A (en) | Novel dynamic feedback and improved patch evaluation software automatic restoration method | |
CN111242347A (en) | Bridge management and maintenance aid decision-making system based on historical weight updating |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200901 |