CN117333874A - Image segmentation method, system, storage medium and device - Google Patents

Image segmentation method, system, storage medium and device Download PDF

Info

Publication number
CN117333874A
CN117333874A CN202311405974.3A CN202311405974A CN117333874A CN 117333874 A CN117333874 A CN 117333874A CN 202311405974 A CN202311405974 A CN 202311405974A CN 117333874 A CN117333874 A CN 117333874A
Authority
CN
China
Prior art keywords
model
segmentation
teacher
student
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311405974.3A
Other languages
Chinese (zh)
Other versions
CN117333874B (en
Inventor
左严
杨萍萍
汤斌
王正荣
王祥伟
包寅杰
张世兰
刘振威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu New Hope Technology Co ltd
Original Assignee
Jiangsu New Hope Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu New Hope Technology Co ltd filed Critical Jiangsu New Hope Technology Co ltd
Priority to CN202311405974.3A priority Critical patent/CN117333874B/en
Publication of CN117333874A publication Critical patent/CN117333874A/en
Application granted granted Critical
Publication of CN117333874B publication Critical patent/CN117333874B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0895Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/7753Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/033Recognition of patterns in medical or anatomical images of skeletal patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image segmentation method, an image segmentation system, a storage medium and an image segmentation device. Mainly comprises the following steps: acquiring image data to be classified; processing the image data by adopting a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model, and the segmentation model is trained by the following method: and taking an exponential moving average of the student model parameters in the training process to obtain a more robust teacher model, and supervising the learning of the student model by taking a prediction result of the teacher model as a pseudo tag. In the image segmentation method, the student model parameters in the training process are exponentially and movably averaged, so that a more robust teacher model is obtained, and the prediction result of the teacher model is used as a pseudo tag to supervise the learning of the student model. The accuracy of the image segmentation method is improved.

Description

Image segmentation method, system, storage medium and device
Technical Field
The present invention relates to the field of image processing, and in particular, to an image segmentation method, system, storage medium, and apparatus.
Background
Image segmentation algorithms are currently in wide use in many fields. The image segmentation algorithm is mainly used for carrying out segmentation processing on the acquired image data so as to obtain a classification result of the image data. For example, the method is widely applied to the fields of automatic driving, medical image aided diagnosis, satellite remote sensing and the like. But the accuracy of conventional image segmentation algorithms needs to be further improved.
Disclosure of Invention
Based on this, an image segmentation method is provided. In the image segmentation method, the student model parameters in the training process are exponentially and movably averaged, so that a more robust teacher model is obtained, and the prediction result of the teacher model is used as a pseudo tag to supervise the learning of the student model. The accuracy of the image segmentation method is improved.
An image segmentation method, comprising:
acquiring image data to be classified;
processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
the segmentation model is trained by the following method:
parameters of the student modelRepresenting the parameters of the teacher model in +.>The student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:wherein t represents the corresponding training step, +.>,/>For the corresponding smoothing coefficient +.>For adjusting->And->For sample X, the student model predicts that +.>The teacher model predicts the result as +.>,/>And->The method can be obtained by the following formula:
wherein the student model and the teacher model adopt the same model structure and useRepresentation of->And->For different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
the final loss function is:wherein y represents the real tag corresponding to sample x, < ->For dividing the loss function, +.>For consistency loss function, +.>For controllingAnd->Is a specific gravity of (c).
And training the segmentation model through a final loss function.
In one of the factsIn the embodiment, use is made ofTo measure pseudo tag->And the distance between the real tags y, whereby the weight is adaptively adjusted>
In one of the embodiments of the present invention,
wherein the method comprises the steps ofAnd->For a specific weight value, +.>,/>As a result of the corresponding threshold value,
is determined by the following wayIs the value of (1): during training, record +.The last k training steps>Is set as vector K, and takes the value in the nth percentile as +.>Value of->WhereinRepresenting hundredA function of the split bits.
In one of the embodiments of the present invention,
usingTo express +.>V pixels of (1), uncertainty of (2)>Obtained by the following formula:the threshold is selected as->Finally filter out +.>Is only supervised with the remaining parts,
the final consistency loss function is:
wherein the method comprises the steps ofAs a function of the corresponding indication,
determination was made using the following formulaIs the value of (1):
wherein u represents a pseudo tagCorresponding whole uncertainty map,>representing a flattening operation, ++>Representing the probability corresponding to the ith training round,
is determined by the following formulaIs of the size of (2):
representing the initial percentile.
In one of the embodiments of the present invention,
the teacher model and the student model are semantic segmentation models.
An image segmentation system, comprising:
the data acquisition unit is used for acquiring image data to be classified;
the data processing unit is used for processing the image data and specifically comprises the following steps:
processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
the segmentation model is trained by the following method:
parameters of the student modelRepresenting the parameters of the teacher model in +.>The student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:
wherein t represents the corresponding training step,,/>for the corresponding smoothing coefficient +.>For adjustingh and->For sample X, the student model predicts that +.>The teacher model predicts the result as,/>And->The method can be obtained by the following formula:
wherein the student model and the teacher model adopt the same model structure and useRepresentation of->And->For different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
the final loss function is:
where y represents the real label to which sample x corresponds,for dividing the loss function, +.>For consistency loss function, +.>For controlling->And->Is a specific gravity of (c).
And training the segmentation model through a final loss function.
A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the image segmentation method.
A computer apparatus, comprising: the image segmentation method comprises the steps of a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus, the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image segmentation method.
The beneficial effects of this application are:
the method takes an exponential moving average (Exponential Moving Average, EMA) of student model parameters in a training process to obtain a more robust teacher model, and uses a prediction result of the teacher model as a pseudo tag to supervise learning of the student model. The present application also contemplates that the training of the pseudo tag should be used more to monitor the model when excessive noise is present in the real tag, and dynamically assigns the weights of the tag monitor and the pseudo tag monitor by measuring the similarity between the pseudo tag and the real tag. The application also considers that the pseudo tag generated by the teacher model can also have noise at the pixel level, so the application lightens the influence of the noise in the pseudo tag on training based on uncertainty estimation of the pseudo tag. To further improve the accuracy of the image segmentation method of the present application.
In the field of medical image segmentation, particularly in the field of rib image segmentation, the application finds out through using a RibSeg data set and performing a contrast test, and the method can remarkably improve the Dice coefficient and the rib recovery value.
Drawings
Fig. 1 is a flow chart of an image segmentation algorithm according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to the appended drawings.
As shown in fig. 1, an embodiment of the present application provides an image segmentation method, where the image segmentation algorithm specifically includes:
acquiring image data to be classified;
processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
the segmentation model is trained by the following method:
parameters of the student modelRepresenting the parameters of the teacher model in +.>Representation, student model usage indexThe moving average algorithm updates the parameters of the teacher model by specifically applying the following formula:
wherein t represents the corresponding training step,,/>for the corresponding smoothing coefficient +.>For adjustingAnd->Is a weight of (2).
For sample X from image data, the student model predicts it asThe teacher model predicts the result as +.>,/>And->The method can be obtained by the following formula:
wherein the student model and the teacher model adopt the same model structure and useThe representation, student model and teacher model may be commonly used semantic segmentation models. Wherein->And->For different Gaussian noise, under different random disturbance, the student model and the teacher model should have the same output for the same sample, and by using this, the output of the teacher model can be used as a pseudo tag to supervise the student model learning,
the final loss function is:wherein y represents the real label (the real label is manually marked) to which the sample x corresponds, is->To partition the loss function, it may be implemented as a Dice loss function, a BCE loss function, etc. />The prediction of the student model and the prediction of the teacher model are made as close as possible to each other for the consistency loss function, and may be a mean square error (Mean Squared Error, MSE) loss function. />For controllingAnd->Is a specific gravity of (c).
And training the segmentation model through a final loss function.
The image segmentation method can be applied to a plurality of fields, such as an automatic driving field, medical image auxiliary diagnosis, satellite remote sensing field and the like. Specifically, in the above method of the present application, the image data to be classified may be an environmental image acquired in the autopilot field, an image captured by a medical instrument acquired in the medical image auxiliary diagnosis field, an image acquired in the satellite remote sensing field, and the like.
On the basis of the above, further, when the noise of the real tag is too high, the pseudo tag should be used moreSupervised network learning, the present application adaptively adjusts tag supervision by evaluating similarity between pseudo tags and real tags>And pseudo tag supervision->And a weight therebetween. Prediction for teacher model->If the gap from the real tag y is too large, the tag may have noise, and at this time the pseudo tag should be used more for supervision, in particular +.>To measure pseudo tag->And the distance between the real tags y, whereby the weight is adaptively adjusted>
On the basis of the above, the method, in particular,can be obtained by the following method.
Wherein the method comprises the steps ofAnd->For a specific weight value, +.>,/>As the network is trained, the accuracy of teacher model prediction is higher and higher for the corresponding threshold value>The whole becomes smaller, thus +.>And should also vary.
Based on the above thought, it is determined by the following wayIs the value of (1): during training, record +.The last k training steps>Is set as vector K, and takes the value in the nth percentile as +.>Is used as a reference to the value of (a),
wherein the method comprises the steps ofRepresenting a percentile taking function.
On the basis of the above, further, in order to solve the problem that noise may also be contained in the pseudo tag and avoid the noise in the pseudo tag being fitted by the model, the method adopts an uncertainty estimation modePseudo tag generated by teacher modelThe uncertainty of each pixel probability distribution is estimated by entropy of the pixel probability distribution, so that high-quality pseudo tags are filtered out to more accurately supervise the student model.
In particular, use is made ofTo express +.>V pixels of (1), uncertainty of (2)>Obtained by the following formula:
the threshold is selected asFinally filter out +.>Is only supervised with the remaining parts,
the final consistency loss function is:wherein p is v Represents the v-th pixel in p,>for the corresponding indication function, it is obvious that +.>The value of (2) is related to the uncertainty of the current label overall and is also related to the training round of the model, and the prediction result of the teacher model is more reliable along with the training of the model, so that the uncertainty is gradually reduced. In a specific implementation, the following formula is used to determine +.>Is the value of (1):
wherein u represents a pseudo tagCorresponding whole uncertainty map,>representing a flattening operation, ++>Representing the probability corresponding to the ith training round,
is determined by the following formulaIs of the size of (2):
representing the initial percentile, total epoch is the total training round.
In one embodiment, the teacher model and the student model are both semantic segmentation models.
The application also provides an image segmentation system, comprising:
the data acquisition unit is used for acquiring image data to be classified;
the data processing unit is used for processing the image data and specifically comprises the following steps:
processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
the segmentation model is trained by the following method:
parameters of the student modelRepresenting the parameters of the teacher model in +.>The student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:wherein t represents the corresponding training step, +.>,/>For the corresponding smoothing coefficient +.>For adjusting->And->Is used for the weight of the (c),
for sample X, the student model predicts it asThe teacher model predicts the result as +.>,/>Andthe method can be obtained by the following formula:
wherein the student model and the teacher model adopt the same model structure and useRepresentation of->And->For different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
the final loss function is:
where y represents the real label to which the sample x corresponds,for dividing the loss function, +.>For consistency loss function, +.>For controlling->And->Is a specific gravity of (c).
And training the segmentation model through a final loss function.
Embodiments of the present application also provide a computer storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to perform operations corresponding to the image segmentation method.
Embodiments of the present application also provide a computer apparatus, comprising: the image segmentation method comprises the steps of a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus, the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image segmentation method.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (8)

1. An image segmentation method, comprising:
acquiring image data to be classified;
processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
the segmentation model is trained by the following method:
parameters of the student modelRepresenting the parameters of the teacher model in +.>The student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:
wherein t represents the corresponding training step, +.>,/>For the corresponding smoothing coefficient +.>For adjusting->And->For the weight of the sample
For sample x, the student model predicts it asThe teacher model predicts the result as +.>,/>And->The method can be obtained by the following formula:
wherein the student model and the teacher model adopt the same model structure and useRepresentation of->And->For different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
the final loss function is:wherein y represents the real tag corresponding to sample x, < ->For dividing the loss function, +.>For consistency loss function, +.>For controllingAnd->Is a specific gravity of (c). And training the segmentation model through a final loss function.
2. The image segmentation method as set forth in claim 1, characterized in thatTo measure false labelsAnd the distance between the real tags y, whereby the weight is adaptively adjusted>
3. The image segmentation method as set forth in claim 2, wherein,
wherein->And->For a specific weight value, +.>,/>As a result of the corresponding threshold value,
is determined by the following wayIs the value of (1): during training, record +.The last k training steps>Is set as vector K, and takes the value in the nth percentile as +.>Value of->WhereinRepresenting a percentile taking function.
4. The image segmentation method as set forth in claim 3, wherein,
usingTo express +.>V pixels of (1), uncertainty of (2)>Obtained by the following formula:the threshold is selected as->Finally filter out +.>Is only supervised with the remaining parts,
the final consistency loss function is:wherein->As a function of the corresponding indication,
determination was made using the following formulaIs the value of (1): />Wherein u represents a pseudo tag->Corresponding whole uncertainty map,>representing a flattening operation, ++>Representing the probability corresponding to the ith training round,
is determined by the following formulaIs of the size of (2): />,/>Representing the initial percentile.
5. The image segmentation method as set forth in claim 1, wherein the teacher model and the student model are both semantic segmentation models.
6. An image segmentation system, comprising:
the data acquisition unit is used for acquiring image data to be classified;
the data processing unit is used for processing the image data and specifically comprises the following steps:
processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
the segmentation model is trained by the following method:
parameters of the student modelRepresenting the parameters of the teacher model in +.>The student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:wherein t represents the corresponding training step, +.>,/>For the corresponding smoothing coefficient +.>For adjusting->And->For sample x, the student model predicts that +.>The teacher model predicts the result as +.>,/>And->The method can be obtained by the following formula:
wherein the student model and the teacher model adopt the same model structure and useRepresentation of->And->For different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
the final loss function is:wherein y represents the real tag corresponding to sample x, < ->For dividing the loss function, +.>For consistency loss function, +.>For controllingAnd->Is characterized by comprising a specific gravity of (2),
and training the segmentation model through a final loss function.
7. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the image segmentation method according to any one of claims 1 to 5.
8. A computer apparatus, comprising: the image segmentation method according to any one of claims 1 to 5, wherein the processor, the memory, the communication interface and the communication bus are used for completing communication among each other through the communication bus, and the memory is used for storing at least one executable instruction which enables the processor to execute the operation corresponding to the image segmentation method according to any one of claims 1 to 5.
CN202311405974.3A 2023-10-27 2023-10-27 Image segmentation method, system, storage medium and device Active CN117333874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311405974.3A CN117333874B (en) 2023-10-27 2023-10-27 Image segmentation method, system, storage medium and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311405974.3A CN117333874B (en) 2023-10-27 2023-10-27 Image segmentation method, system, storage medium and device

Publications (2)

Publication Number Publication Date
CN117333874A true CN117333874A (en) 2024-01-02
CN117333874B CN117333874B (en) 2024-07-30

Family

ID=89290247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311405974.3A Active CN117333874B (en) 2023-10-27 2023-10-27 Image segmentation method, system, storage medium and device

Country Status (1)

Country Link
CN (1) CN117333874B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256646A (en) * 2021-04-13 2021-08-13 浙江工业大学 Cerebrovascular image segmentation method based on semi-supervised learning
CN114120319A (en) * 2021-10-09 2022-03-01 苏州大学 Continuous image semantic segmentation method based on multi-level knowledge distillation
WO2022041307A1 (en) * 2020-08-31 2022-03-03 温州医科大学 Method and system for constructing semi-supervised image segmentation framework
US20220207718A1 (en) * 2020-12-27 2022-06-30 Ping An Technology (Shenzhen) Co., Ltd. Knowledge distillation with adaptive asymmetric label sharpening for semi-supervised fracture detection in chest x-rays
CN115131565A (en) * 2022-07-20 2022-09-30 天津大学 Histology image segmentation model based on semi-supervised learning
CN115131366A (en) * 2021-11-25 2022-09-30 北京工商大学 Multi-mode small target image full-automatic segmentation method and system based on generation type confrontation network and semi-supervision field self-adaptation
CN115661459A (en) * 2022-11-02 2023-01-31 安徽大学 2D mean teacher model using difference information
CN115797637A (en) * 2022-12-29 2023-03-14 西北工业大学 Semi-supervised segmentation model based on uncertainty between models and in models
CN116228639A (en) * 2022-12-12 2023-06-06 杭州电子科技大学 Oral cavity full-scene caries segmentation method based on semi-supervised multistage uncertainty perception
CN116543162A (en) * 2023-05-09 2023-08-04 山东建筑大学 Image segmentation method and system based on feature difference and context awareness consistency

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022041307A1 (en) * 2020-08-31 2022-03-03 温州医科大学 Method and system for constructing semi-supervised image segmentation framework
US20220207718A1 (en) * 2020-12-27 2022-06-30 Ping An Technology (Shenzhen) Co., Ltd. Knowledge distillation with adaptive asymmetric label sharpening for semi-supervised fracture detection in chest x-rays
CN113256646A (en) * 2021-04-13 2021-08-13 浙江工业大学 Cerebrovascular image segmentation method based on semi-supervised learning
CN114120319A (en) * 2021-10-09 2022-03-01 苏州大学 Continuous image semantic segmentation method based on multi-level knowledge distillation
CN115131366A (en) * 2021-11-25 2022-09-30 北京工商大学 Multi-mode small target image full-automatic segmentation method and system based on generation type confrontation network and semi-supervision field self-adaptation
CN115131565A (en) * 2022-07-20 2022-09-30 天津大学 Histology image segmentation model based on semi-supervised learning
CN115661459A (en) * 2022-11-02 2023-01-31 安徽大学 2D mean teacher model using difference information
CN116228639A (en) * 2022-12-12 2023-06-06 杭州电子科技大学 Oral cavity full-scene caries segmentation method based on semi-supervised multistage uncertainty perception
CN115797637A (en) * 2022-12-29 2023-03-14 西北工业大学 Semi-supervised segmentation model based on uncertainty between models and in models
CN116543162A (en) * 2023-05-09 2023-08-04 山东建筑大学 Image segmentation method and system based on feature difference and context awareness consistency

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HYUN-CHEOL PARK ET AL.: "Polyp segmentation with consistency training and continuous update of pseudo-label", 《SCIENTIFIC REPORTS》, 26 August 2022 (2022-08-26) *
姜威威;刘祥强;韩金仓;: "基于深度协同训练的肝脏CT图像自动分割方法", 电子设计工程, no. 14, 20 July 2020 (2020-07-20) *
文笃石;: "基于二次聚类弱监督学习的图像语义分割", 国外电子测量技术, no. 09, 15 September 2017 (2017-09-15) *

Also Published As

Publication number Publication date
CN117333874B (en) 2024-07-30

Similar Documents

Publication Publication Date Title
CN108229489B (en) Key point prediction method, network training method, image processing method, device and electronic equipment
CN111860573B (en) Model training method, image category detection method and device and electronic equipment
Deng et al. Unsupervised segmentation of synthetic aperture radar sea ice imagery using a novel Markov random field model
CN109086811B (en) Multi-label image classification method and device and electronic equipment
CN113221903B (en) Cross-domain self-adaptive semantic segmentation method and system
CN108229675B (en) Neural network training method, object detection method, device and electronic equipment
CN108229522B (en) Neural network training method, attribute detection device and electronic equipment
JP2011003207A (en) Adaptive discriminative generative model and incremental fisher discriminant analysis and application to visual tracking
US12061991B2 (en) Transfer learning with machine learning systems
CN113869449A (en) Model training method, image processing method, device, equipment and storage medium
CN116385879A (en) Semi-supervised sea surface target detection method, system, equipment and storage medium
CN113435587A (en) Time-series-based task quantity prediction method and device, electronic equipment and medium
EP4170561A1 (en) Method and device for improving performance of data processing model, storage medium and electronic device
CN114219936A (en) Object detection method, electronic device, storage medium, and computer program product
CN115797735A (en) Target detection method, device, equipment and storage medium
CN114255381B (en) Training method of image recognition model, image recognition method, device and medium
CN116109812A (en) Target detection method based on non-maximum suppression threshold optimization
CN109242882B (en) Visual tracking method, device, medium and equipment
CN111583321A (en) Image processing apparatus, method and medium
CN113822144A (en) Target detection method and device, computer equipment and storage medium
CN113313179A (en) Noise image classification method based on l2p norm robust least square method
CN117333874B (en) Image segmentation method, system, storage medium and device
CN114898145B (en) Method and device for mining implicit new class instance and electronic equipment
CN114820496A (en) Human motion detection and identification method and system used for magnetic resonance imaging examination
CN113837220A (en) Robot target identification method, system and equipment based on online continuous learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant