CN113781397B - Medical image lesion detection modeling method, device and system based on federated learning


Info

Publication number
CN113781397B
Authority
CN
China
Prior art keywords
local
image
focus
client
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110918283.8A
Other languages
Chinese (zh)
Other versions
CN113781397A (en)
Inventor
葛仕明
鲍可欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN202110918283.8A
Publication of CN113781397A
Application granted granted Critical
Publication of CN113781397B

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Mathematical Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a medical image lesion detection modeling method, device and system based on federated learning, comprising the following steps: the global server S generates a global parameter ω_0 and sends it to each local lesion recognition client C_k; the detection head network parameters returned by K local lesion recognition clients C_k are used to generate the global parameter ω_{θ+1}; the global parameter ω_{θ+1} is sent to each local lesion recognition client C_k so as to obtain a corresponding medical image lesion detection model. According to the method, training information of the intermediate model is transmitted to the data holders in encoded form, the parties do not need to share their respective data, and the models are integrated through a corresponding strategy, so that better training and prediction results are returned.

Description

Medical image lesion detection modeling method, device and system based on federated learning
Technical Field
The invention belongs to the field of computer vision and deep learning, and particularly relates to a medical image lesion detection modeling method, device and system based on federated learning.
Background
Nowadays, people's health awareness is continually rising, access to medical information is increasingly valued, medical image data are growing explosively, and intelligent healthcare has come into view. Based on these image data, deep learning techniques mine the underlying rules and information, build accurate models, and diagnose and predict disease. Early on, deep networks pre-trained on ImageNet were fine-tuned and applied to medical imaging tasks by way of transfer learning, which speeds up model convergence and improves accuracy. Medical datasets often lack sufficient labelled data, and data augmentation is one way to deal with such limited datasets; for example, synthetic image augmentation with GANs improves the performance of convolutional neural networks in liver lesion classification. Adversarial learning and attention mechanisms are also widely used in medical imaging tasks: in adversarial learning, a discriminator identifies whether a sample comes from the model or from the data, while an attention mechanism focuses on the places that need attention when describing image information. Combining these two techniques with other networks allows a given task to achieve higher performance. Deep learning algorithms are widely applied to medical images; they provide distilled medical knowledge to everyone, supply the information support needed by medical workers and patients, reduce misdiagnosis, late diagnosis and overdiagnosis, improve doctors' efficiency and accuracy, and have achieved remarkable success in assisting diagnosis and treatment.
At the same time, people are paying more attention to protecting their own data privacy, which greatly obstructs true interconnection and sharing of data among institutions. Against this background, federated learning was proposed, which enables multiple clients to complete the training of a machine learning model with the assistance of a server.
Although deep learning has achieved remarkable success in medical imaging tasks, most approaches are built by centralized training. Many medical institutions operate under strict privacy practices and may face legal, administrative or ethical restrictions, so collecting medical data centrally and sharing patient information among multiple parties is often infeasible; this is the medical "data island" problem. Centralized modeling obviously cannot provide effective protection of medical privacy, and model accuracy also suffers from the shortage of medical data and labels. Federated learning combines autonomy and union: training information of the intermediate model is transmitted to the data holders in encoded form, and the models are integrated through a corresponding strategy, so that better training and prediction results are returned. On the one hand, modeling can be achieved without sharing the parties' data, realizing privacy-preserving joint modeling; on the other hand, the knowledge of all parties can be efficiently aggregated, improving the accuracy of the model and making it more stable and general.
Disclosure of Invention
In order to overcome the defects of the prior art and achieve better protection of medical privacy together with better model performance, the invention provides a medical image lesion detection modeling method, device and system based on federated learning, which differ from the traditional approach of realizing target detection and classification analysis through centralized learning.
In order to solve the technical problems, the invention is realized by the following technical scheme.
A medical image lesion detection modeling method based on federated learning, suitable for a system composed of a global server S and N local lesion recognition clients C_k, where N ≥ 2, the steps comprising:
1) the global server S generates a global parameter ω_0, sends it to each local lesion recognition client C_k, and selects a certain proportion of the local lesion recognition clients C_k;
2) the detection head network parameters returned by the K selected local lesion recognition clients C_k are used to generate the global parameter ω_{θ+1}, where K ≤ N and θ is the federated round number; the detection head network parameters are generated as follows: the local lesion recognition client C_k trains on its training data set D_k based on the global parameter ω_{θ-1} to generate a local model, from which the corresponding detection head network parameters are obtained;
3) the global parameter ω_{θ+1} is sent to each local lesion recognition client C_k, so that when a first preset condition is reached, each local lesion recognition client C_k obtains a corresponding medical image lesion detection model.
Further, the detection head network parameters are desensitized parameters.
Further, the method of generating the global parameter ω_{θ+1} comprises: taking a weighted average of the returned detection head network parameters according to each client's share of the total data volume, as written out below.
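Written out, with W_k^θ denoting the detection head parameters returned by client C_k in round θ and n_k its number of samples (both symbols are introduced here for readability; the original uses inline formula images), the aggregation is the data-volume-weighted average:

$$\omega_{\theta+1}=\sum_{k=1}^{K}\frac{n_k}{n}\,W_k^{\theta},\qquad n=\sum_{k=1}^{K}n_k.$$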
Further, the detection head network parameters are generated by the following steps:
1) each local lesion recognition client C_k collects a training data set D_k, where D_k comprises m data samples and each data sample comprises an image I_m and a label T_m;
2) each image I_m is preprocessed and scaled to a standard size to obtain an image I'_m;
3) the anchor boxes for large, medium and small lesions are initialized, and the network parameters are initialized according to the global parameter ω_{θ-1};
4) the image feature f_m of each image I'_m is extracted;
5) each image feature f_m is fed into a semantic integrator and, after up-sampling and multiple convolutions, merged into an image feature f'_m (a sketch of this and the following step is given after this list);
6) the image feature f'_m is passed into the multi-scale detection head, which outputs prediction boxes based on the initial anchor boxes;
7) the prediction boxes are compared with the ground-truth boxes generated from the labels T_m, and the network is updated by back-propagation;
8) steps 3)-7) are repeated until a second preset condition is reached, yielding the local model's detection head network parameters.
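As a rough illustration of steps 5) and 6), the PyTorch sketch below shows one possible semantic integrator that up-samples and convolves a deep feature map before merging it with a shallower one, together with a multi-scale detection head that applies one prediction convolution per scale. The module names, channel counts and per-anchor output layout (4 box offsets plus 1 score) are assumptions made for this sketch, not details given in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticIntegrator(nn.Module):
    """Step 5): up-sample deep features, concatenate with shallower ones, then convolve."""
    def __init__(self, c_deep=256, c_shallow=128, c_out=128):
        super().__init__()
        self.reduce = nn.Conv2d(c_deep, c_out, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(c_out + c_shallow, c_out, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, f_deep, f_shallow):
        up = F.interpolate(self.reduce(f_deep), size=f_shallow.shape[-2:], mode="nearest")
        return self.fuse(torch.cat([up, f_shallow], dim=1))

class MultiScaleHead(nn.Module):
    """Step 6): per-scale 1x1 prediction convs; each anchor predicts 4 box offsets + 1 score."""
    def __init__(self, channels=(128, 256), n_anchors=3):
        super().__init__()
        self.heads = nn.ModuleList(nn.Conv2d(c, n_anchors * 5, kernel_size=1) for c in channels)

    def forward(self, feats):
        return [head(f) for head, f in zip(self.heads, feats)]

# toy forward pass on feature maps of a 640x640 image
f_shallow, f_deep = torch.randn(1, 128, 80, 80), torch.randn(1, 256, 40, 40)
f_merged = SemanticIntegrator()(f_deep, f_shallow)
print([p.shape for p in MultiScaleHead()([f_merged, f_deep])])
```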
Further, when training the local model, the anchor box ratio is set to 1:2 with several sizes, and an exponential loss of the form L = -α(1 - y_t)^γ log(y_t) is used to adjust the weights of positive and negative samples, where (1 - y_t)^γ is the modulating factor, y_t = y for a positive sample and y_t = 1 - y for a negative sample, y ∈ (0, 1) is the output of the activation function, γ ≥ 0 is the adjustable focusing parameter, and α is the balancing variable (a sketch is given below).
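The weighting described here has the same form as the standard focal loss; the short sketch below implements it under that assumption (the function name and the default values of γ and α are illustrative, not taken from the patent):

```python
import torch

def focal_weighted_loss(y, target, gamma=2.0, alpha=0.25):
    """-alpha * (1 - y_t)**gamma * log(y_t), with y_t = y for positive samples
    and y_t = 1 - y for negative samples; alpha balances the positive-sample weight."""
    y = y.clamp(1e-6, 1 - 1e-6)                    # y in (0, 1): activation output
    y_t = torch.where(target == 1, y, 1 - y)
    return (-alpha * (1 - y_t) ** gamma * torch.log(y_t)).mean()

# example: predicted scores for four anchors, the first two being positive samples
scores = torch.tensor([0.9, 0.6, 0.3, 0.05])
labels = torch.tensor([1, 1, 0, 0])
print(focal_weighted_loss(scores, labels))
```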
Further, the first preset condition includes: the maximum number of federated rounds is reached, the loss of each medical image lesion detection model falls below a set threshold, or each local model is already robust.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above method when run.
An electronic device comprising a memory and a processor, wherein the memory stores a program for performing the above-described method.
A medical image lesion detection modeling system based on federated learning, comprising:
a global server, for generating a global parameter ω_0 and sending it to each local lesion recognition client C_k; generating the global parameter ω_{θ+1} from the detection head network parameters returned by K local lesion recognition clients C_k; and sending the global parameter ω_{θ+1} to each local lesion recognition client C_k, where K ≤ N and θ is the federated round number;
N local lesion recognition clients C_k, where each local lesion recognition client C_k trains on its training data set D_k based on the global parameter ω_{θ-1} to generate a local model, from which the corresponding detection head network parameters are obtained, and obtains a corresponding medical image lesion detection model when the first preset condition is reached.
Further, the detection head network parameters are generated by the following steps:
1) each local lesion recognition client C_k collects a training data set D_k, where D_k comprises m data samples and each data sample comprises an image I_m and a label T_m;
2) each image I_m is preprocessed and scaled to a standard size to obtain an image I'_m;
3) the anchor boxes for large, medium and small lesions are initialized, and the network parameters are initialized according to the global parameter ω_{θ-1};
4) the image feature f_m of each image I'_m is extracted;
5) each image feature f_m is fed into a semantic integrator and, after up-sampling and multiple convolutions, merged into an image feature f'_m;
6) the image feature f'_m is passed into the multi-scale detection head, which outputs prediction boxes based on the initial anchor boxes;
7) the prediction boxes are compared with the ground-truth boxes generated from the labels T_m, and the network is updated by back-propagation;
8) steps 3)-7) are repeated until a second preset condition is reached, yielding the local model's detection head network parameters.
The beneficial effects of the invention are: training information of the intermediate model is transmitted to the data holders in encoded form, the parties do not need to share their respective data, and the models are integrated through a corresponding strategy, so that better training and prediction results are returned. Especially under conditions of medical data privacy and insufficient medical samples, the method and device can train a better deep learning model without exposing private data, with model accuracy similar to that of centralized training.
Drawings
Fig. 1 is a client-server schematic diagram of federated learning-based intelligent medical image analysis.
Fig. 2 is a flow chart of a local lesion detection model training phase.
Detailed Description
In order to make the above-mentioned scheme and beneficial effects of the present invention more comprehensible, the following detailed description is given by way of example with reference to the accompanying drawings.
The embodiment provides a medical image lesion detection modeling method based on federated learning and a device for realizing the method. The device comprises a global server S and local lesion recognition clients C_k. Each local lesion recognition client comprises a feature extractor F, a semantic integrator T and a multi-scale detection head D.
Training data sets D_k of N institutions are prepared in advance, N > 2; each data set D_k consists of M genuinely collected data samples, and each data set typically has a sample size M of several thousand; each sample contains a medical image I_m and a label T_m.
Training on the side of the global server S proceeds as shown in Fig. 1, specifically as follows:
the server side randomly initializes the global network parameters to ω_0 and transmits the relevant hyperparameters;
after initialization is completed, the θ-th round of federated learning starts: the server selects K clients, at a proportion frac, from the N local clients of the N institutions, where K = max(N · frac, 1) and frac ∈ (0, 1), and transmits the initialized parameters, the number of training rounds and other hyperparameters to each selected client;
each selected client performs a local update, training its local lesion recognition model through local iterative updates; after the corresponding rounds of local updates, the partial network-layer parameters ω_θ^l, with the sensitive parameters removed, are transmitted to the server;
the server collects the network parameters of all selected clients into a client parameter set, where θ is the current federated round number, takes a weighted average over this set, and updates the global parameter ω_{θ+1}, completing one round of federated iteration for all local clients;
the next iteration is then performed, until the number of rounds reaches a preset maximum, the loss falls below a preset threshold, or the model is already robust (a compact sketch of this loop follows).
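A compact Python sketch of this server-side loop, assuming each client object exposes a hypothetical local_update method returning its detection head parameters (a dict of arrays), its training loss and its sample count; it illustrates the selection and weighted-average steps only, not the patent's actual implementation:

```python
import random

def federated_training(clients, init_params, frac=0.3, max_rounds=100, loss_threshold=0.05):
    """clients: list of objects with local_update(params) -> (head_params, loss, n_samples)."""
    global_params = init_params
    for theta in range(max_rounds):
        k = max(int(len(clients) * frac), 1)                 # K = max(N * frac, 1)
        selected = random.sample(clients, k)
        results = [c.local_update(global_params) for c in selected]
        n_total = sum(n for _, _, n in results)
        # weighted average of the returned detection-head parameters, weight n_k / n
        global_params = {
            name: sum(params[name] * (n / n_total) for params, _, n in results)
            for name in results[0][0]
        }
        if sum(loss for _, loss, _ in results) / k < loss_threshold:
            break                                            # loss below the preset threshold
    return global_params
```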
In the network parameter transmission process, only some desensitized network parameters are transmitted, which reduces the communication time between clients and server, better accommodates data heterogeneity, conveys the network's important medical features, and reduces redundant information while preserving detection accuracy. Specifically:
the local client trains the local model; since medical images are complex and object detection networks are usually large, multiple rounds of local training can be performed so that the required important knowledge is learned better;
when the local client transmits its parameters, the detection head parameters, i.e. the prediction-layer parameters ω_θ^l, are homomorphically encrypted; these parameters capture important deep knowledge of the images;
after the server merges the parameters of all selected clients and they are decrypted, with the total data volume of the selected clients being n and the data volume of each client being n_k, a weighted average over the clients' data volumes integrates the obtained knowledge, which is encrypted and transmitted back to all clients;
after decrypting the new network parameters, each client updates its corresponding detection head parameters (an illustrative encrypt, aggregate, decrypt sketch follows).
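The patent does not name a particular homomorphic encryption scheme; purely to illustrate the encrypt, aggregate, decrypt flow, the sketch below assumes an additively homomorphic Paillier cryptosystem via the python-paillier (phe) package and two toy clients with flattened detection head parameters:

```python
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)   # key pair shared by the clients

# each client encrypts its (flattened) detection head parameters before upload
client_params = {"c1": [0.12, -0.30], "c2": [0.08, -0.10]}
client_sizes = {"c1": 600, "c2": 400}                           # n_k samples per client
encrypted = {c: [pub.encrypt(v) for v in p] for c, p in client_params.items()}

# the server aggregates ciphertexts weighted by n_k / n without seeing the plaintexts
n_total = sum(client_sizes.values())
aggregated = []
for i in range(len(next(iter(client_params.values())))):
    weighted = [encrypted[c][i] * (client_sizes[c] / n_total) for c in encrypted]
    total = weighted[0]
    for w in weighted[1:]:
        total = total + w                                       # homomorphic addition
    aggregated.append(total)

# each client decrypts the aggregated update locally
print([round(priv.decrypt(x), 4) for x in aggregated])          # approx. [0.104, -0.22]
```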
The local lesion recognition client C_k is trained as shown in Fig. 2, specifically as follows:
the obtained data set is preprocessed: each ultrasound image I_m is preprocessed and scaled to the same standard size to obtain I'_m, an image of width w_m, height h_m and a channels, where a is 1 for a single-channel image or 3 for a three-channel image, its parameters being expressed as <w_m, h_m, a>;
when training starts, the network is fine-tuned to better match medical characteristics: the anchor box ratio is set to 1:2 with several sizes, so that lesions are detected more accurately under a rectangular representation (an illustrative anchor construction is sketched below); because medical data often suffer from an uneven distribution of positive and negative samples, an exponential loss of the form L = -α(1 - p_t)^γ log(p_t) can be used to adjust the weights of positive and negative samples, where (1 - p_t)^γ is the modulating factor, p_t = p for a positive sample, p_t = 1 - p for a negative sample, p ∈ (0, 1) is the output of the activation function, γ ≥ 0 is the adjustable focusing parameter, and α is the balancing variable, which can increase or decrease the weight of positive samples;
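For illustration, a small helper that builds 1:2 anchor boxes at several scales; the specific scale values are assumptions made for this sketch, not values given in the patent:

```python
def make_anchors(scales=(32, 96, 256), ratio=2.0):
    """Return (width, height) pairs with width:height = 1:ratio for small, medium and large lesions."""
    anchors = []
    for s in scales:                         # s**2 is the anchor area
        w = s / ratio ** 0.5
        h = s * ratio ** 0.5
        anchors.append((round(w, 1), round(h, 1)))
    return anchors

print(make_anchors())   # [(22.6, 45.3), (67.9, 135.8), (181.0, 362.0)]
```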
the anchor boxes for predicting large, medium and small lesions and the pre-trained network parameters are initialized;
the images I'_m are fed into the network in random order; a slicing operation is first performed to obtain a feature map with parameters <w_m/2, h_m/2, 4a> (this step is sketched below), which is then passed to the feature extractor F, and the image feature f_m is extracted through multiple convolutions;
the obtained image feature f_m is fed into the semantic integrator and, after up-sampling and several convolutions, merged into the image feature f'_m, which is passed to the prediction layer;
the features are passed into the multi-scale detection head, which generates predicted target bounding boxes and categories on the basis of the initial anchor boxes; the candidate boxes are filtered, the output prediction boxes are compared with the ground-truth boxes and their difference is computed, and the network is then updated by back-propagation, completing one round of iteration;
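The slicing operation that turns a <w_m, h_m, a> image into a <w_m/2, h_m/2, 4a> feature map can be realized as a space-to-depth rearrangement; a minimal PyTorch sketch of that step (the function name is an assumption):

```python
import torch

def slice_image(x: torch.Tensor) -> torch.Tensor:
    """Rearrange an (N, a, h, w) image into (N, 4a, h/2, w/2) by sampling every
    second pixel with four different offsets and stacking the results on the channel axis."""
    return torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                      x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)

image = torch.randn(1, 1, 640, 640)          # a single-channel ultrasound image
print(slice_image(image).shape)              # torch.Size([1, 4, 320, 320])
```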
and carrying out the next iteration until the iteration round reaches the preset maximum iteration round number or the model loss is lower than a preset threshold value.
The feature extractor, semantic integrator and multi-scale detection head can be realized with existing neural network structures. During federated learning, the model network used by a local client can be replaced according to the actual data set, other network-layer parameters of the local model can be chosen for transmission to the server, and only the parameters of the jointly modeled part are updated, achieving better privacy protection and accommodating more heterogeneous data. Besides taking a weighted average of the selected clients' network weight parameters, a weighted average of the network losses may be taken, or a different federated learning strategy may be chosen. When the object detection network is applied to medicine, fine-tuning is not limited to adjusting the anchor boxes; the network layers and the loss function can also be adapted so that the network is better suited to medical tasks.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting it, and those skilled in the art may modify or substitute the technical solution of the present invention without departing from the spirit and scope of the present invention, and the protection scope of the present invention shall be defined by the claims.

Claims (7)

1. A medical image lesion detection modeling method based on federated learning, suitable for a system composed of a global server S and N local lesion recognition clients C_k, where N ≥ 2, the steps comprising:
1) the global server S generates a global parameter ω_0, sends it to each local lesion recognition client C_k, and selects a preset proportion of the local lesion recognition clients C_k;
2) the detection head network parameters returned by the K selected local lesion recognition clients C_k are used to generate the global parameter ω_{θ+1}, where K ≤ N and θ is the federated round number; the detection head network parameters are generated as follows:
2.1) the local lesion recognition client C_k trains on its training data set D_k based on the global parameter ω_{θ-1} to generate a local model, wherein, when training the local model, the anchor box ratio is set to 1:2 with several sizes, and an exponential loss of the form L = -α(1 - y_t)^γ log(y_t) is used to adjust the weights of positive and negative samples, (1 - y_t)^γ being the modulating factor, y_t = y for a positive sample, y_t = 1 - y for a negative sample, y ∈ (0, 1) the output of the activation function, γ ≥ 0 the adjustable focusing parameter, and α the balancing variable;
2.2) each local lesion recognition client C_k collects a training data set D_k, where D_k comprises m data samples and each data sample comprises an image I_m and a label T_m;
2.3) each image I_m is preprocessed and scaled to a standard size to obtain an image I'_m;
2.4) the anchor boxes for large, medium and small lesions are initialized, and the network parameters are initialized according to the global parameter ω_{θ-1};
2.5) the image feature f_m of each image I'_m is extracted;
2.6) each image feature f_m is fed into a semantic integrator and, after up-sampling and multiple convolutions, merged into an image feature f'_m;
2.7) the image feature f'_m is passed into the multi-scale detection head, which outputs prediction boxes based on the initial anchor boxes;
2.8) the prediction boxes are compared with the ground-truth boxes generated from the labels T_m, and the network is updated by back-propagation;
2.9) steps 2.4)-2.8) are repeated until a second preset condition is reached, yielding the local model's detection head network parameters;
3) the global parameter ω_{θ+1} is sent to each local lesion recognition client C_k, so that when a first preset condition is reached, each local lesion recognition client C_k obtains a corresponding medical image lesion detection model.
2. The method of claim 1, wherein the detection head network parameters are desensitized parameters.
3. The method of claim 1, wherein the method of generating the global parameter ω_{θ+1} comprises: taking a weighted average of the returned detection head network parameters according to each client's share of the total data volume.
4. The method of claim 1, wherein the first preset condition comprises: the maximum number of federated rounds is reached, the loss of each medical image lesion detection model falls below a set threshold, or each local model is already robust.
5. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1-4 when run.
6. An electronic device comprising a memory, in which a computer program is stored, and a processor arranged to run the computer program to perform the method of any of claims 1-4.
7. A medical image lesion detection modeling system based on federated learning, comprising:
a global server, for generating a global parameter ω_0 and sending it to each local lesion recognition client C_k; generating the global parameter ω_{θ+1} from the detection head network parameters returned by K local lesion recognition clients C_k; and sending the global parameter ω_{θ+1} to each local lesion recognition client C_k, where K ≤ N and θ is the federated round number;
N local lesion recognition clients C_k, each local lesion recognition client C_k training on its training data set D_k based on the global parameter ω_{θ-1} to generate a local model, from which the corresponding detection head network parameters are obtained, and obtaining a corresponding medical image lesion detection model when the first preset condition is reached; wherein, when training the local model, the anchor box ratio is set to 1:2 with several sizes, and an exponential loss of the form L = -α(1 - y_t)^γ log(y_t) is used to adjust the weights of positive and negative samples, (1 - y_t)^γ being the modulating factor, y_t = y for a positive sample, y_t = 1 - y for a negative sample, y ∈ (0, 1) the output of the activation function, γ ≥ 0 the adjustable focusing parameter, and α the balancing variable;
the corresponding detection head network parameters are obtained through the following steps:
1) each local lesion recognition client C_k collects a training data set D_k, where D_k comprises m data samples and each data sample comprises an image I_m and a label T_m;
2) each image I_m is preprocessed and scaled to a standard size to obtain an image I'_m;
3) the anchor boxes for large, medium and small lesions are initialized, and the network parameters are initialized according to the global parameter ω_{θ-1};
4) the image feature f_m of each image I'_m is extracted;
5) each image feature f_m is fed into a semantic integrator and, after up-sampling and multiple convolutions, merged into an image feature f'_m;
6) the image feature f'_m is passed into the multi-scale detection head, which outputs prediction boxes based on the initial anchor boxes;
7) the prediction boxes are compared with the ground-truth boxes generated from the labels T_m, and the network is updated by back-propagation;
8) steps 3)-7) are repeated until a second preset condition is reached, yielding the local model's detection head network parameters.
CN202110918283.8A 2021-08-11 2021-08-11 Medical image focus detection modeling method, device and system based on federal learning Active CN113781397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110918283.8A CN113781397B (en) 2021-08-11 2021-08-11 Medical image focus detection modeling method, device and system based on federal learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110918283.8A CN113781397B (en) 2021-08-11 2021-08-11 Medical image focus detection modeling method, device and system based on federal learning

Publications (2)

Publication Number Publication Date
CN113781397A CN113781397A (en) 2021-12-10
CN113781397B (en) 2023-11-21

Family

ID=78837413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110918283.8A Active CN113781397B (en) 2021-08-11 2021-08-11 Medical image focus detection modeling method, device and system based on federal learning

Country Status (1)

Country Link
CN (1) CN113781397B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422605A (en) * 2022-01-12 2022-04-29 重庆邮电大学 Communication gradient self-adaptive compression method based on federal learning
CN114492849B (en) * 2022-01-24 2023-09-08 光大科技有限公司 Model updating method and device based on federal learning
CN115187783B (en) * 2022-09-09 2022-12-27 之江实验室 Multi-task hybrid supervision medical image segmentation method and system based on federal learning
CN115578369B (en) * 2022-10-28 2023-09-15 佐健(上海)生物医疗科技有限公司 Online cervical cell TCT slice detection method and system based on federal learning
CN115860116A (en) * 2022-12-02 2023-03-28 广州图灵科技有限公司 Federal learning method based on generative model and deep transfer learning
CN115830400B (en) * 2023-02-10 2023-05-16 南昌大学 Data identification method and system based on federal learning mechanism
CN116402812B (en) * 2023-06-07 2023-09-19 江西业力医疗器械有限公司 Medical image data processing method and system
CN116935136B (en) * 2023-08-02 2024-07-02 深圳大学 Federal learning method for processing classification problem of class imbalance medical image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674866A (en) * 2019-09-23 2020-01-10 兰州理工大学 Method for detecting X-ray breast lesion images by using transfer learning characteristic pyramid network
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network
WO2021114618A1 (en) * 2020-05-14 2021-06-17 平安科技(深圳)有限公司 Federated learning method and apparatus, computer device, and readable storage medium
WO2021115480A1 (en) * 2020-06-30 2021-06-17 平安科技(深圳)有限公司 Federated learning method, device, equipment, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674866A (en) * 2019-09-23 2020-01-10 兰州理工大学 Method for detecting X-ray breast lesion images by using transfer learning characteristic pyramid network
WO2021114618A1 (en) * 2020-05-14 2021-06-17 平安科技(深圳)有限公司 Federated learning method and apparatus, computer device, and readable storage medium
WO2021115480A1 (en) * 2020-06-30 2021-06-17 平安科技(深圳)有限公司 Federated learning method, device, equipment, and storage medium
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face super-resolution reconstruction with joint multi-task learning; Wang Huan, Wu Chengdong, Chi Jianning, Yu Xiaosheng, Hu Qian; Journal of Image and Graphics, No. 02, pp. 19-30 *

Also Published As

Publication number Publication date
CN113781397A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN113781397B (en) Medical image focus detection modeling method, device and system based on federal learning
Chouhan et al. Soft computing approaches for image segmentation: a survey
CN109583342B (en) Human face living body detection method based on transfer learning
CN109376636B (en) Capsule network-based eye fundus retina image classification method
CN108230296B (en) Image feature recognition method and device, storage medium and electronic device
CN108182394B (en) Convolutional neural network training method, face recognition method and face recognition device
CN109346159B (en) Case image classification method, device, computer equipment and storage medium
CN107247971B (en) Intelligent analysis method and system for ultrasonic thyroid nodule risk index
US9514356B2 (en) Method and apparatus for generating facial feature verification model
US20190125298A1 (en) Echocardiographic image analysis
Raykar et al. Learning from crowds.
JP2022031730A (en) System and method for modeling probability distribution
CN106951825A (en) A kind of quality of human face image assessment system and implementation method
WO2021057186A1 (en) Neural network training method, data processing method, and related apparatuses
CN111242948B (en) Image processing method, image processing device, model training method, model training device, image processing equipment and storage medium
CN106407369A (en) Photo management method and system based on deep learning face recognition
KR102036957B1 (en) Safety classification method of the city image using deep learning-based data feature
CN109919252A (en) The method for generating classifier using a small number of mark images
CN109063643B (en) Facial expression pain degree identification method under condition of partial hiding of facial information
Kundu et al. Vision transformer based deep learning model for monkeypox detection
Lin et al. Adversarial learning with data selection for cross-domain histopathological breast cancer segmentation
CN116433970A (en) Thyroid nodule classification method, thyroid nodule classification system, intelligent terminal and storage medium
KR20210086374A (en) Method of classifying skin disease based on artificial intelligence
CN116958154A (en) Image segmentation method and device, storage medium and electronic equipment
Liu et al. A novel deep transfer learning method for sar and optical fusion imagery semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant