CN111612096A - Large-scale fundus image classification system training method based on Spark platform - Google Patents

Large-scale fundus image classification system training method based on Spark platform

Info

Publication number
CN111612096A
CN111612096A
Authority
CN
China
Prior art keywords
training
frog
neural network
convolutional neural
frogs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010484386.3A
Other languages
Chinese (zh)
Inventor
丁卫平
任龙杰
丁嘉陆
李铭
孙颖
冯志豪
张毅
鞠恒荣
曹金鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202010484386.3A
Publication of CN111612096A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a large-scale fundus image classification system training method based on the Spark platform, comprising the following steps: S10, set the parameters necessary for training the distributed convolutional neural network; S20, call the convolutional neural network algorithm program, substitute the parameters into it, and generate the initial weights for convolutional neural network training through a distributed frog-leaping algorithm; S30, train the convolutional neural network with the stored standard image data, find the optimal frog, take it as the initial weight for the next group's weight training, and finish training the network; and S40, store the trained convolutional neural network model. In this method, the mixed frog-leaping algorithm (shuffled frog leaping algorithm, SFLA) generates the initial network weights, and distributed parallel training of the convolutional neural network is realized through a grouping optimization strategy, which can effectively improve both the efficiency and the classification accuracy of convolutional neural network training on large-scale fundus images.

Description

Large-scale fundus image classification system training method based on Spark platform
Technical Field
The invention relates to the technical field of medical big data, in particular to a large-scale fundus image classification system training method based on a Spark platform.
Background
With the information-based construction of medical health, the size and type of medical big data are growing at a very fast speed, wherein medical images and audio-visual data account for a large proportion. Fundus images are a basic and noninvasive means for retinal examination in medical images, and with the rapid increase in the amount of fundus image data, novel large-data processing methods are urgently needed to classify and study large-scale fundus image data.
In recent years, convolutional neural networks have developed rapidly and show clear advantages in image classification. However, when facing large-scale fundus image data, training and classification with a convolutional neural network consume a large amount of time, and the results often fail to meet practical requirements.
Disclosure of Invention
In order to solve the above problems, the invention provides a training method for a large-scale fundus image classification system based on the Spark platform, in which the mixed frog-leaping algorithm generates the initial network weights and distributed parallel training of the convolutional neural network is realized through a grouping optimization strategy, effectively improving both the efficiency and the classification accuracy of convolutional neural network training on large-scale fundus images.
In order to achieve the above purpose, the invention adopts a technical scheme that:
a large-scale fundus image classification system training method based on a Spark platform comprises the following steps: s10 setting parameters necessary for executing the training of the distributed convolution neural network, wherein the parameters comprise training step number (S) and model name (dname); s20, calling the convolutional neural network algorithm program, substituting the parameters into the algorithm program, and generating an initial weight value during convolutional neural network training through a distributed leapfrog algorithm; s30, training the convolutional neural network by using the stored large-scale fundus image data, finding out the optimal frog, taking the optimal frog as the initial weight of the next packet weight training, and finishing the training of the convolutional neural network; s40 storing the trained convolutional neural network model; the convolutional neural network model trained in the steps S10-S40 is the large-scale fundus image classification system.
Further, the step S20 includes: S21, call the convolutional neural network algorithm program and substitute the parameters into it; S22, generate m frogs by a standard normal distribution, calculate each frog's adaptive value through the Spark framework, collect, sort and divide the population, then cyclically perform local search and re-mix the populations until the mixed frog-leaping algorithm meets the convergence condition, obtaining the globally optimal frog f_q; f_q is then taken as the initial weights of the convolutional neural network.
Further, the step S22 includes the following steps: S221, initialize the fundus image dataset and define the maximum number of training times L_max; in the mixed frog-leaping algorithm, the frog group is generated by a standard normal distribution according to the frog number m and the population number n; S222, calculate the adaptive values of the m frogs in parallel through the Spark framework: randomly select x images from the fundus image dataset as reference images, then substitute each frog into the convolutional neural network for forward propagation in parallel to calculate its adaptive value, where the adaptive value formula is as follows:
[adaptive value formula, rendered as an image in the original and not reproduced here]
wherein p represents the network output value, t represents the true value, s' represents the dimension of each group of lesion labels, b represents the number of retinopathy types to be detected simultaneously, and i, j and k are integer subscripts; S223, for the frogs whose loss values have not yet been calculated, substitute them into the convolutional neural network, calculate the loss values of all images in parallel, and finally gather and sum them at the master node to obtain the adaptive values; if an adaptive value after updating is larger than before updating, the best frog f_b replaces the updated one; S224, sort all frogs in ascending order of the fitness function and divide them into n populations, the frog with the globally optimal fitness being f_qb; S225, implement the distributed parallel local search of the n populations through the Spark framework, with each population as one RDD partition; within each population, update the position of the frog with the worst fitness through the position update function, whose formula is as follows:
D = (f_b + f_p - f_w) × Rand(0,1),
f_new = f_w + D,
[formula for the offset f_p, rendered as an image in the original and not reproduced here]
wherein f_p represents an offset whose dimension is the same as that of each frog, f_pi denotes the value of f_p in the i-th dimension, f_new represents the updated frog, D represents the leap distance, Rand() represents a random number function, and exp() represents an exponential function with the natural constant e as base; S226, the master node collects all frogs and mixes them; and S227, judge whether the mixed frog-leaping algorithm meets the convergence condition; if so, stop the algorithm, take the optimal frog f_qb as the initial weights of the convolutional neural network and store it to the specified directory; otherwise, return to step S223 and continue until the convergence condition is met and the optimal frog is found.
Further, in step S30, fundus images with the same training step count are selected from the large-scale fundus image data and divided into two data sets; distributed computation is performed through the Spark framework, network weights are trained and summarized per data set, and the master node starts a distributed frog-leaping optimization task to find the optimal frog as the initial weight for the next group's weight training.
Further, the step S30 includes the following steps: s31, secondary grouping is carried out according to the input total running steps S, and the calculation formula of the running steps m' of each group is as follows:
[formula for the per-group running step count m', rendered as an image in the original and not reproduced here]
wherein w represents the number of servers used for distributed computation, and t' represents the number of summarizations; S32, taking a group as the unit, read the weights under the specified directory of the large-scale fundus image data as initial weights and train the convolutional neural network weights; n' different network weights are finally obtained through independent training and summarized to the master node on the Spark platform, where n' satisfies the following formula:
[formula for n', rendered as an image in the original and not reproduced here]
wherein w ≥ 2;
S33, take the n' networks as the initial frogs, optimize them with the mixed frog-leaping algorithm, and find the globally optimal frog f_qb; store the weight parameters of the optimal frog to the designated directory of the large-scale fundus image data, overwriting the previously saved file; S34, judge whether the current grouping count t satisfies t ≤ t'; if so, jump back to S32, otherwise execute S35; and S35, end the mixed frog-leaping algorithm and return the training-complete status.
Compared with the prior art, the technical scheme of the invention has the following advantages:
according to the training method of the large-scale fundus image classification system based on the Spark platform, the mixed frog leap algorithm is adopted to generate the initial network weight, the distributed parallel training of the convolutional neural network is realized through the grouping optimization strategy, and the high efficiency and the classification accuracy of large-scale fundus images during convolutional neural network training can be effectively improved. The large-scale fundus image classification system provides intelligent processing analysis of related diseases, mining and extracting of related disease rules and knowledge and the like for large-scale fundus images, provides an effective service platform for developing intelligent decision support for related diseases, provides intelligent processing analysis of related diseases and mining of the disease rules, and has important significance and value.
Drawings
The technical solution and the advantages of the present invention will be apparent from the following detailed description of the embodiments of the present invention with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a large-scale fundus image classification system training method based on a Spark platform according to an embodiment of the present invention;
FIG. 2 is a flow chart of a large scale fundus image classification system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, a large-scale fundus image classification system training method based on the Spark platform is provided, as shown in FIGS. 1 to 2, comprising the following steps: S10, set the parameters necessary for training the distributed convolutional neural network, including the training step count (S) and the model name (dname); S20, call the convolutional neural network algorithm program, substitute the parameters into it, and generate the initial weights for convolutional neural network training through a distributed frog-leaping algorithm; S30, train the convolutional neural network with the stored large-scale fundus image data, find the optimal frog, take it as the initial weight for the next group's weight training, and finish training the network; S40, store the trained convolutional neural network model. The convolutional neural network model trained in steps S10-S40 is the large-scale fundus image classification system.
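The four-stage pipeline S10-S40 above can be sketched as follows. This is a rough orientation only: the patent publishes no code, so every function body here is an illustrative stub, and only the control flow mirrors the described steps.

```python
# Hedged sketch of the S10-S40 pipeline. All function bodies are
# placeholders, not the patent's actual program.

def generate_initial_weights_sfla(params):
    # S20: in the patent this runs the distributed mixed frog-leaping
    # algorithm on Spark; a fixed-size zero vector stands in here.
    return [0.0] * 4

def train_grouped(init_w, params):
    # S30: grouped distributed training; stubbed to pass weights through.
    return {"weights": init_w, "steps": params["S"]}

def save_model(model, dname):
    # S40: persist the trained model under its name (stub: tag in place).
    model["name"] = dname
    return model

def train_fundus_classifier(steps, dname):
    """S10 set parameters -> S20 SFLA init -> S30 grouped training -> S40 save."""
    params = {"S": steps, "dname": dname}           # S10: required parameters
    init_w = generate_initial_weights_sfla(params)  # S20
    model = train_grouped(init_w, params)           # S30
    return save_model(model, params["dname"])       # S40
```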
The step S20 includes: and S21 calling the convolutional neural network algorithm program, and substituting the parameters into the algorithm program.
S22, generate m frogs by a standard normal distribution, calculate each frog's adaptive value through the Spark framework, collect, sort and divide the population, then cyclically perform local search and re-mix the populations until the mixed frog-leaping algorithm meets the convergence condition, obtaining the globally optimal frog f_q; f_q is then taken as the initial weights of the convolutional neural network.
The step S22 includes the following steps: S221, initialize the fundus image dataset and define the maximum number of training times L_max; in the mixed frog-leaping algorithm, the frog group is generated by a standard normal distribution according to the frog number m and the population number n.
S222, calculate the adaptive values of the m frogs in parallel through the Spark framework: randomly select x images from the fundus image dataset as reference images, then substitute each frog into the convolutional neural network for forward propagation in parallel to calculate its adaptive value, where the adaptive value formula is as follows:
[adaptive value formula, rendered as an image in the original and not reproduced here]
wherein p represents the network output value, t represents the true value, s' represents the dimension of each group of pathological change labels, b represents the number of types of the retinopathy needing to be detected simultaneously, and i, j and k are integer subscripts.
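The adaptive-value computation just described (a per-frog loss over the x reference images, summed over the b lesion types and s' label dimensions) can be sketched as follows. The exact formula is an unrendered image in the original, so a squared error stands in purely as an assumption, and `forward` is a placeholder for the network's forward pass.

```python
def adaptive_value(frog, ref_images, forward):
    """Sum a per-image loss over the reference images; squared error over
    the b x s' label matrix (indices j, k) is an assumed stand-in."""
    total = 0.0
    for image, labels in ref_images:        # labels: b rows of s' values each
        p = forward(frog, image)            # network output, same b x s' shape
        for j, row in enumerate(labels):
            for k, t_jk in enumerate(row):
                total += (p[j][k] - t_jk) ** 2
    return total

def adaptive_values_parallel(frogs, ref_images, forward):
    # On Spark this would be a parallel map over the m frogs; a plain
    # sequential map stands in here.
    return [adaptive_value(f, ref_images, forward) for f in frogs]
```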
S223, for the frogs whose loss values have not yet been calculated, substitute them into the convolutional neural network, calculate the loss values of all images in parallel, and finally gather and sum them at the master node to obtain the adaptive values; if an adaptive value after updating is larger than before updating, the best frog f_b replaces the updated one.
S224, sort all frogs in ascending order of the fitness function and divide them into n populations; at this point the frog with the globally optimal fitness is f_qb.
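The sort-and-divide step can be sketched as below. The patent does not spell out the dealing rule, so round-robin dealing (the usual memeplex split in the shuffled frog leaping literature) is assumed; a lower adaptive value is taken to be better, matching the ascending sort.

```python
def partition_populations(frogs, fitness, n):
    """S224 sketch: rank frogs ascending by fitness, record the globally
    best frog f_qb, and deal the ranked frogs round-robin into n populations."""
    ranked = sorted(frogs, key=fitness)
    f_qb = ranked[0]                         # globally best frog
    populations = [ranked[i::n] for i in range(n)]
    return f_qb, populations
```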
S225, implement the distributed parallel local search of the n populations through the Spark framework, with each population as one RDD partition; within each population, update the position of the frog with the worst fitness through the position update function, whose formula is as follows:
D = (f_b + f_p - f_w) × Rand(0,1),
f_new = f_w + D,
[formula for the offset f_p, rendered as an image in the original and not reproduced here]
wherein f_p represents an offset whose dimension is the same as that of each frog, f_pi denotes the value of f_p in the i-th dimension, f_new represents the updated frog, D represents the leap distance, Rand() is a random number function, and exp() represents an exponential function with the natural constant e as base.
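The position update D = (f_b + f_p - f_w) × Rand(0,1), f_new = f_w + D can be sketched per dimension as below. The definition of the offset f_p is an unrendered equation image in the original, so the exp-damped random offset used here is purely an assumed illustration of how exp() might enter.

```python
import math
import random

def update_worst_frog(f_b, f_w, rand=random.random):
    """S225 sketch: move the worst frog f_w toward the best frog f_b by
    D = (f_b + f_p - f_w) * Rand(0,1), then f_new = f_w + D."""
    dim = len(f_w)
    r = rand()                                               # Rand(0,1)
    f_p = [rand() * math.exp(-i / dim) for i in range(dim)]  # assumed offset form
    D = [(f_b[i] + f_p[i] - f_w[i]) * r for i in range(dim)]
    return [f_w[i] + D[i] for i in range(dim)]
```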
S226, the master node collects all frogs and mixes them all. And
S227, judge whether the mixed frog-leaping algorithm meets the convergence condition; if so, stop the algorithm, take the optimal frog f_qb as the initial weights of the convolutional neural network and store it to the specified directory; otherwise, return to step S223 and continue until the convergence condition is met and the optimal frog is found.
In step S30, fundus images with the same training step count are selected from the large-scale fundus image data and divided into two data sets; distributed computation is performed through the Spark framework, network weights are trained and summarized per data set, and the master node starts a distributed frog-leaping optimization task to find the optimal frog as the initial weight for the next group's weight training. The training data sets are grouped a second time, and the repeated rounds of distributed computation and summarizing frog-leaping optimization effectively improve the accuracy of the classification system.
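One train-then-summarize round of this scheme can be sketched as below; the per-dataset training and the frog-leaping optimizer are passed in as placeholders (`train_one`, `sfla_optimize`), since the patent gives no implementation, and a plain map stands in for the distributed Spark job.

```python
def grouped_training_round(datasets, train_one, sfla_optimize):
    """S30 sketch: train one set of network weights per data set
    independently, gather the resulting weights at the master node, then
    run the frog-leaping optimizer over them to pick the next round's
    initial weight."""
    weights = [train_one(ds) for ds in datasets]  # distributed on Spark in the patent
    return sfla_optimize(weights)                 # master-node optimization step
```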
The step S30 includes the following steps: s31, secondary grouping is carried out according to the input total running steps S, and the calculation formula of the running steps m' of each group is as follows:
[formula for the per-group running step count m', rendered as an image in the original and not reproduced here]
where w represents the number of servers used for distributed computation and t' represents the number of summaries.
S32, taking a group as the unit, read the weights under the specified directory of the large-scale fundus image data as initial weights and train the convolutional neural network weights; n' different network weights are finally obtained through independent training and summarized to the master node on the Spark platform, where n' satisfies the following formula:
[formula for n', rendered as an image in the original and not reproduced here]
wherein w ≥ 2;
S33, take the n' networks as the initial frogs, optimize them with the mixed frog-leaping algorithm, and find the globally optimal frog f_qb; store the weight parameters of the optimal frog to the designated directory of the large-scale fundus image data, overwriting the previously saved file;
S34, judge whether the current grouping count t satisfies t ≤ t'; if so, jump back to step S32, otherwise execute step S35. And
and S35, ending the mixed frog leaping algorithm operation and returning to the training completion state.
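The S31-S35 control flow can be sketched as below. The per-group step formula for m' is an unrendered image in the original, so the expression used here, m' = S // (w × t'²), is only a guess that happens to reproduce the worked example that follows (S = 10000, w = 2, t' = 5 gives m' = 200), not the patent's actual formula.

```python
def run_grouped_training(S, w, t_prime, train_round):
    """S31-S35 sketch: split the S total running steps into t' rounds of
    train-and-summarize across w servers, carrying the best weight forward."""
    m_prime = S // (w * t_prime ** 2)   # S31: steps per group (assumed formula)
    best = None
    for _ in range(t_prime):            # S32-S34: loop until t' groupings done
        best = train_round(m_prime, best)
    return best                         # S35: return the training-complete result
```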
Example 1
S10, the user sets the training step count S = 10000 and the model name dname = q1 for training the distributed convolutional neural network.
S21, the classification system's backend Java program receives the information passed by the front end, calls spark_sfcnn.py, and substitutes the user inputs S = 10000 and dname = q1.
S221, initialize the fundus image dataset and define the maximum number of training times L_max = 10000; in the mixed frog-leaping algorithm the frog number m = 50 and the population number n = 10, and the frog group is generated through a standard normal distribution.
S222, calculate the adaptive values of the 50 frogs in parallel through the Spark framework: randomly select x = 200 images from the fundus image dataset as reference images, then substitute each frog into the convolutional neural network for forward propagation in parallel to calculate its adaptive value, where the adaptive value formula is as follows:
[adaptive value formula, rendered as an image in the original and not reproduced here]
wherein p represents a network output value, t represents a true value, s' represents the dimension of each group of pathological change labels, b represents the number of types of retinopathy needing to be detected simultaneously, and i, j and k are integer subscripts;
S223, for the frogs whose loss values have not yet been calculated, substitute them into the convolutional neural network, calculate the loss values of all images in parallel, and finally gather and sum them at the master node to obtain the adaptive values; if an adaptive value after updating is larger than before updating, the best frog f_b replaces the updated one;
S224, sort all frogs in ascending order of the fitness function and divide them into 10 populations; at this point the frog with the globally optimal fitness is f_qb.
S225, implement the distributed parallel local search of the 10 populations through the Spark framework, with each population as one RDD (Resilient Distributed Dataset) partition; within each population, update the position of the frog with the worst fitness through the position update function, whose formula is as follows:
D = (f_b + f_p - f_w) × Rand(0,1),
f_new = f_w + D,
[formula for the offset f_p, rendered as an image in the original and not reproduced here]
wherein f_p represents an offset whose dimension is the same as that of each frog, f_pi denotes the value of f_p in the i-th dimension, f_new represents the updated frog, D represents the leap distance, Rand() is a random number function, and exp() represents an exponential function with the natural constant e as base;
S226, the master node collects all frogs and mixes them all; and
S227, judge whether the mixed frog-leaping algorithm meets the convergence condition; if so, stop the algorithm, take the optimal frog f_qb as the initial weights of the convolutional neural network and store it to the specified directory; otherwise, return to step S223 and continue until the convergence condition is met and the optimal frog is found.
S31 performs secondary grouping according to the input total running step count S = 10000; with the summary count t' = 5 and w = 2, m' = 200 is obtained from the formula.
S32, taking a group as the unit, read the weights under the specified directory of the large-scale fundus image data as initial weights and train the convolutional neural network weights; 10 different network weights are finally obtained through independent training and summarized to the master node on the Spark platform.
S33, take the 10 networks as the initial frogs, optimize them with the mixed frog-leaping algorithm, and find the globally optimal frog f_qb; store the weight parameters of the optimal frog to the designated directory of the large-scale fundus image data, overwriting the previously saved file;
S34, judge whether the current grouping count is still within t' = 5; if so, jump back to S32, otherwise execute S35; and
and S35, ending the mixed frog leaping algorithm operation and returning to the training completion state.
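Example 1's numbers can be checked under two assumptions about the unrendered formulas: n' = w × t' (2 servers × 5 summaries = 10 gathered networks) and m' = S / (w × t'²) (10000 / 50 = 200 steps per group). Both expressions are guesses that merely match the worked example above, not the patent's actual formulas.

```python
# Hedged consistency check of Example 1's parameters (assumed formulas).
S, w, t_prime = 10000, 2, 5
n_prime = w * t_prime             # 10 network weights gathered per round
m_prime = S // (w * t_prime ** 2) # 200 running steps per group
print(n_prime, m_prime)           # prints: 10 200
```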
The above description is only an exemplary embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures or equivalent processes that are transformed by the content of the present specification and the attached drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (5)

1. A large-scale fundus image classification system training method based on a Spark platform is characterized by comprising the following steps:
S10, set the parameters necessary for training the distributed convolutional neural network, including the training step count (S) and the model name (dname);
s20, calling the convolutional neural network algorithm program, substituting the parameters into the algorithm program, and generating an initial weight value during convolutional neural network training through a distributed leapfrog algorithm;
S30, train the convolutional neural network with the stored large-scale fundus image data, find the optimal frog, take it as the initial weight for the next group's weight training, and finish training the network; and
s40, storing the trained convolutional neural network model;
the convolutional neural network model trained in the steps S10-S40 is the large-scale fundus image classification system.
2. The large scale fundus image classification system training method based on Spark platform according to claim 1, wherein said step S20 includes:
s21 calling the convolutional neural network algorithm program, and substituting the parameters into the algorithm program;
S22, generate m frogs by a standard normal distribution, calculate each frog's adaptive value through the Spark framework, collect, sort and divide the population, then cyclically perform local search and re-mix the populations until the mixed frog-leaping algorithm meets the convergence condition, obtaining the globally optimal frog f_q; f_q is then taken as the initial weights of the convolutional neural network.
3. The training method of the large-scale fundus image classification system based on the Spark platform according to claim 2, wherein said step S22 comprises the steps of:
S221, initialize the fundus image dataset and define the maximum number of training times L_max; in the mixed frog-leaping algorithm, the frog group is generated by a standard normal distribution according to the frog number m and the population number n;
S222, calculate the adaptive values of the m frogs in parallel through the Spark framework: randomly select x images from the fundus image dataset as reference images, then substitute each frog into the convolutional neural network for forward propagation in parallel to calculate its adaptive value, where the adaptive value formula is as follows:
[adaptive value formula, rendered as an image in the original and not reproduced here]
wherein p represents a network output value, t represents a true value, s' represents the dimension of each group of pathological change labels, b represents the number of types of retinopathy needing to be detected simultaneously, and i, j and k are integer subscripts;
S223, for the frogs whose loss values have not yet been calculated, substitute them into the convolutional neural network, calculate the loss values of all images in parallel, and finally gather and sum them at the master node to obtain the adaptive values; if an adaptive value after updating is larger than before updating, the best frog f_b replaces the updated one;
S224, sort all frogs in ascending order of the fitness function and divide them into n populations, the frog with the globally optimal fitness being f_qb;
S225, implement the distributed parallel local search of the n populations through the Spark framework, with each population as one RDD partition; within each population, update the position of the frog with the worst fitness through the position update function, whose formula is as follows:
D = (f_b + f_p - f_w) × Rand(0,1),
f_new = f_w + D,
[formula for the offset f_p, rendered as an image in the original and not reproduced here]
wherein f_p represents an offset whose dimension is the same as that of each frog, f_pi denotes the value of f_p in the i-th dimension, f_new represents the updated frog, D represents the leap distance, Rand() represents a random number function, and exp() represents an exponential function with the natural constant e as base;
S226, the master node collects all frogs and mixes them all; and
S227, judge whether the mixed frog-leaping algorithm meets the convergence condition; if so, stop the algorithm, take the optimal frog f_qb as the initial weights of the convolutional neural network and store it to the specified directory; otherwise, return to step S223 and continue until the convergence condition is met and the optimal frog is found.
4. The training method of the large-scale fundus image classification system based on the Spark platform according to claim 1, wherein in step S30, fundus images with the same training step count are selected from the large-scale fundus image data and divided into two data sets; distributed computation is performed through the Spark framework, network weights are trained and summarized per data set, and the master node starts a distributed frog-leaping optimization task to find the optimal frog as the initial weight for the next group's weight training.
5. The training method of the large-scale fundus image classification system based on the Spark platform according to claim 4, wherein said step S30 comprises the steps of:
S31, secondary grouping is performed according to the input total number of running steps S; the running steps m' of each group are calculated as:
m' = S / (w × t'),
where w represents the number of servers used for distributed computation, and t' represents the number of summarization rounds;
S32, taking one group as a unit, the weights under the specified directory of the large-scale fundus image data are read as initial weights and the convolutional neural network weights are trained; n' different network weights are finally obtained through independent training on each server and are summarized to the master node based on the Spark platform, where n' satisfies the following formula:
[formula rendered as image FDA0002518467930000032 in the original: defines n' in terms of w]
wherein w is more than or equal to 2;
S33, taking the n' networks as initial frogs, optimize with the shuffled frog leaping algorithm to find the globally optimal frog f_qb; store the weight parameters of the optimal frog to the specified directory of the large-scale fundus image data, overwriting the previously stored file;
S34, determine whether the current grouping round t' still satisfies the loop condition; if so, jump to S32, otherwise execute S35; and
S35, end the shuffled frog leaping algorithm and return the training-completed status.
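The grouped training loop of S31–S35 can be sketched as a simple driver-side routine. All function and parameter names here are illustrative, and the globally fittest candidate is kept as a simplified stand-in for the frog-leaping merge of S33, which the patent performs with the shuffled frog leaping algorithm rather than a plain minimum.

```python
def grouped_weight_training(total_steps, n_workers, n_rounds,
                            init_weights, train_fn, fitness_fn):
    # S31: split the total step budget S over w servers and t' rounds,
    # so each group runs m' = S / (w * t') steps.
    steps_per_group = total_steps // (n_workers * n_rounds)
    best = init_weights
    for _ in range(n_rounds):                               # S34: grouping loop
        # S32: each of the w servers trains an independent copy of the
        # weights, starting from the currently stored best weights.
        candidates = [train_fn(best, steps_per_group) for _ in range(n_workers)]
        # S33: keep the fittest candidate (surrogate for the SFLA merge)
        # as the initial weights of the next round.
        best = min(candidates, key=fitness_fn)
    return best                                             # S35: training done
```

In the patented system the per-round training and summarization run under Spark, with the master node collecting the n' weight sets; this sketch only shows the control flow of the rounds.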
CN202010484386.3A 2020-06-01 2020-06-01 Large-scale fundus image classification system training method based on Spark platform Pending CN111612096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010484386.3A CN111612096A (en) 2020-06-01 2020-06-01 Large-scale fundus image classification system training method based on Spark platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010484386.3A CN111612096A (en) 2020-06-01 2020-06-01 Large-scale fundus image classification system training method based on Spark platform

Publications (1)

Publication Number Publication Date
CN111612096A true CN111612096A (en) 2020-09-01

Family

ID=72201056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010484386.3A Pending CN111612096A (en) 2020-06-01 2020-06-01 Large-scale fundus image classification system training method based on Spark platform

Country Status (1)

Country Link
CN (1) CN111612096A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255889A (en) * 2021-05-26 2021-08-13 安徽理工大学 Occupational pneumoconiosis multi-modal analysis method based on deep learning
CN113971367A (en) * 2021-08-27 2022-01-25 天津大学 Automatic design method of convolutional neural network framework based on shuffled frog-leaping algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871995A (en) * 2019-02-02 2019-06-11 浙江工业大学 The quantum optimization parameter adjustment method of distributed deep learning under Spark frame
CN110929775A (en) * 2019-11-18 2020-03-27 南通大学 Convolutional neural network weight optimization method for retinopathy classification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG QIANG et al.: "Adaptive grouping chaotic cloud model shuffled frog leaping algorithm for solving continuous space optimization problems", Control and Decision *
WANG YU et al.: "Research on bearing fault diagnosis based on a shuffled frog leaping optimized neural network", Journal of Mechanical Transmission *

Similar Documents

Publication Publication Date Title
CN113407759B (en) Multi-modal entity alignment method based on adaptive feature fusion
Liu et al. Genetic evaluation of fertility traits of dairy cattle using a multiple-trait animal model
Nakhleh et al. Towards the development of computational tools for evaluating phylogenetic network reconstruction methods
CN107018184A (en) Distributed deep neural network cluster packet synchronization optimization method and system
CN113656596B (en) Multi-modal entity alignment method based on triple screening fusion
EP3646252A1 (en) Selective training for decorrelation of errors
CN105260171B (en) A kind of generation method and device of virtual item
CN111612096A (en) Large-scale fundus image classification system training method based on Spark platform
TW201835789A (en) Method and device for constructing scoring model and evaluating user credit
CN111967971A (en) Bank client data processing method and device
US8170963B2 (en) Apparatus and method for processing information, recording medium and computer program
Soto et al. Using autonomous search for generating good enumeration strategy blends in constraint programming
US20200292340A1 (en) Robot running path, generation method, computing device and storage medium
US20210312295A1 (en) Information processing method, information processing device, and information processing program
US20120173468A1 (en) Medical data prediction method using genetic algorithms
CN113015219B (en) Network resource selection method and device based on strategy gradient and storage medium
CN110825522A (en) Spark parameter self-adaptive optimization method and system
CN111582450A (en) Neural network model training method based on parameter evaluation and related device
US7991617B2 (en) Optimum design management apparatus from response surface calculation and method thereof
CN111814965A (en) Hyper-parameter adjusting method, device, equipment and storage medium
CN108399105A (en) A kind of Method for HW/SW partitioning based on improvement brainstorming algorithm
CN108415773B (en) Efficient software and hardware partitioning method based on fusion algorithm
JP5687122B2 (en) Software evaluation device, software evaluation method, and system evaluation device
CN110879778A (en) Novel dynamic feedback and improved patch evaluation software automatic restoration method
CN111242347A (en) Bridge management and maintenance aid decision-making system based on historical weight updating

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200901