WO2021017006A1 - Image processing method and device, neural network and training method, storage medium - Google Patents
Image processing method and device, neural network and training method, storage medium
- Publication number
- WO2021017006A1 (PCT/CN2019/098928)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network
- sub
- decoding
- encoding
- output
- Prior art date
Classifications
- G06T9/002 — Image coding using neural networks (G06T9/00: Image coding)
- G06N3/04 — Architecture, e.g. interconnection topology (G06N3/02: Neural networks; G06N3/00: Computing arrangements based on biological models)
- G06T3/4046 — Scaling of whole images or parts thereof using neural networks (G06T3/40: Scaling of whole images or parts thereof)
- G06T7/11 — Region-based segmentation (G06T7/10: Segmentation; Edge detection; G06T7/00: Image analysis)
- G06T2207/20081 — Training; Learning (G06T2207/20: Special algorithmic details)
- G06T2207/20084 — Artificial neural networks [ANN]
Definitions
- the embodiments of the present disclosure relate to an image processing method, an image processing device, a neural network, a neural network training method, and a storage medium.
- CNN: Convolutional Neural Network
- At least one embodiment of the present disclosure provides an image processing method, including: acquiring an input image; and processing the input image using a neural network to obtain a first segmented image and a second segmented image. The neural network includes two encoding and decoding networks: a first encoding and decoding network and a second encoding and decoding network, and the input of the first encoding and decoding network includes the input image. Processing the input image using the neural network to obtain the first segmented image and the second segmented image includes: performing segmentation processing on the input image using the first encoding and decoding network to obtain a first output feature map and the first segmented image; combining the first output feature map with at least one of the input image and the first segmented image to obtain the input of the second encoding and decoding network; and performing segmentation processing on the input of the second encoding and decoding network using the second encoding and decoding network to obtain the second segmented image.
- each of the two encoding and decoding networks includes: an encoding element network and a decoding element network;
- the segmentation processing of the first encoding and decoding network includes: using the encoding element network of the first encoding and decoding network to perform encoding processing on the input image to obtain a first encoding feature map; and using the decoding element network of the first encoding and decoding network to perform decoding processing on the first encoding feature map to obtain the output of the first encoding and decoding network, where the output of the first encoding and decoding network includes the first segmented image;
- the segmentation processing of the second encoding and decoding network includes: using the encoding element network of the second encoding and decoding network to perform encoding processing on the input of the second encoding and decoding network to obtain a second encoding feature map; and using the decoding element network of the second encoding and decoding network to perform decoding processing on the second encoding feature map to obtain the output of the second encoding and decoding network, where the output of the second encoding and decoding network includes the second segmented image.
- the encoding element network includes N encoding sub-networks and N-1 down-sampling layers, the N encoding sub-networks are connected in sequence, and each down-sampling layer is used to connect two adjacent encoding sub-networks, where N is an integer and N ≥ 2;
- the encoding processing of the encoding element network includes: using the i-th encoding sub-network among the N encoding sub-networks to process the input of the i-th encoding sub-network to obtain the output of the i-th encoding sub-network; using the down-sampling layer connecting the i-th and (i+1)-th encoding sub-networks among the N encoding sub-networks to perform down-sampling processing on the output of the i-th encoding sub-network to obtain the down-sampled output of the i-th encoding sub-network; and using the (i+1)-th encoding sub-network to process the down-sampled output of the i-th encoding sub-network to obtain the output of the (i+1)-th encoding sub-network, where i is an integer and 1 ≤ i ≤ N-1.
- when N > 2, the decoding element network includes N-1 decoding sub-networks and N-1 up-sampling layers, the N-1 decoding sub-networks are connected in sequence, the N-1 up-sampling layers include a first up-sampling layer and N-2 second up-sampling layers, the first up-sampling layer is used to connect the first decoding sub-network among the N-1 decoding sub-networks and the N-th encoding sub-network among the N encoding sub-networks, and each second up-sampling layer is used to connect two adjacent decoding sub-networks;
- the decoding processing of the decoding element network includes: obtaining the input of the j-th decoding sub-network among the N-1 decoding sub-networks; and using the j-th decoding sub-network to process the input of the j-th decoding sub-network to obtain the output of the j-th decoding sub-network, where j is an integer and 1 ≤ j ≤ N-1;
- the size of the up-sampling input of the j-th decoding sub-network is the same as the size of the output of the (N-j)-th encoding sub-network, where 1 ≤ j ≤ N-1.
- when N = 2, the encoding element network further includes a second encoding sub-network, and the decoding element network includes the first decoding sub-network;
- the decoding processing of the decoding element network includes: using the first up-sampling layer connecting the first decoding sub-network and the second encoding sub-network to perform up-sampling processing on the output of the second encoding sub-network to obtain the up-sampling input of the first decoding sub-network; and combining the up-sampling input of the first decoding sub-network with the output of the first encoding sub-network as the input of the first decoding sub-network, where the size of the up-sampling input of the first decoding sub-network is the same as the size of the output of the first encoding sub-network.
- each of the N encoding sub-networks and the N-1 decoding sub-networks includes a first convolution module and a residual module; the processing of each sub-network includes: using the first convolution module to process the input of the sub-network corresponding to the first convolution module to obtain a first intermediate output; and using the residual module to perform residual processing on the first intermediate output to obtain the output of the sub-network.
- the residual module includes a plurality of second convolution modules; using the residual module to perform residual processing on the first intermediate output to obtain the output of the sub-network includes: processing the first intermediate output using the plurality of second convolution modules to obtain a second intermediate output; and performing residual connection and addition processing on the first intermediate output and the second intermediate output to obtain the output of the sub-network.
- the processing of each of the first convolution module and the plurality of second convolution modules includes: convolution processing, activation processing, and batch normalization processing.
- the input and output sizes of each decoding sub-network in the decoding element network are the same, and the input and output sizes of each encoding sub-network in the encoding element network are the same.
- each encoding and decoding network further includes a fusion module; the fusion module in the first encoding and decoding network is used to process the first output feature map to obtain the first segmented image; using the second encoding and decoding network to perform segmentation processing on the input of the second encoding and decoding network to obtain the second segmented image includes: using the second encoding and decoding network to perform segmentation processing on the input of the second encoding and decoding network to obtain a second output feature map, and using the fusion module in the second encoding and decoding network to process the second output feature map to obtain the second segmented image.
- the first segmented image corresponds to a first area of the input image, and the second segmented image corresponds to a second area of the input image;
- the first area of the input image surrounds the second area of the input image.
- At least one embodiment of the present disclosure further provides a neural network training method, including: obtaining a training input image; and using the training input image to train a neural network to be trained, so as to obtain the neural network used in the image processing method provided by any embodiment of the present disclosure.
- using the training input image to train the neural network to be trained includes: using the neural network to be trained to process the training input image to obtain a first training segmented image and a second training segmented image; calculating the system loss value of the neural network to be trained through a system loss function, based on a first reference segmented image and a second reference segmented image corresponding to the training input image as well as the first training segmented image and the second training segmented image; and correcting the parameters of the neural network to be trained based on the system loss value; where the first training segmented image corresponds to the first reference segmented image, and the second training segmented image corresponds to the second reference segmented image.
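As an illustrative sketch only (the patent does not prescribe a framework or optimizer; PyTorch and the `system_loss` helper sketched after the loss-function description below are assumptions), one parameter-correction step could look like this:

```python
def training_step(network, optimizer, train_image, ref_seg1, ref_seg2, system_loss):
    """One training iteration: forward pass, system loss value, parameter correction."""
    train_seg1, train_seg2 = network(train_image)  # first and second training segmented images
    loss = system_loss(train_seg1, train_seg2, ref_seg1, ref_seg2)
    optimizer.zero_grad()
    loss.backward()   # gradients used to correct the parameters of the network
    optimizer.step()
    return loss.item()
```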
- the system loss function includes a first segmentation loss function and a second segmentation loss function; each of the first segmentation loss function and the second segmentation loss function includes a cross loss function and a similarity loss function.
- the first segmentation loss function is expressed as: L01 = λ11·L11 + λ12·L21, where L01 represents the first segmentation loss function, L11 represents the cross loss function in the first segmentation loss function, λ11 represents the weight of the cross loss function in the first segmentation loss function, L21 represents the similarity loss function in the first segmentation loss function, and λ12 represents the weight of the similarity loss function in the first segmentation loss function.
- the cross loss function L11 in the first segmentation loss function is expressed as:
- the similarity loss function L21 in the first segmentation loss function is expressed as:
- x_{m1n1} represents the value of the pixel located in row m1 and column n1 of the first training segmented image
- y_{m1n1} represents the value of the pixel located in row m1 and column n1 of the first reference segmented image
- the second segmentation loss function is expressed as: L02 = λ21·L12 + λ22·L22, where L02 represents the second segmentation loss function, L12 represents the cross loss function in the second segmentation loss function, λ21 represents the weight of the cross loss function in the second segmentation loss function, L22 represents the similarity loss function in the second segmentation loss function, and λ22 represents the weight of the similarity loss function in the second segmentation loss function.
- the cross loss function L12 in the second segmentation loss function is expressed as:
- the similarity loss function L22 in the second segmentation loss function is expressed as:
- x_{m2n2} represents the value of the pixel located in row m2 and column n2 of the second training segmented image
- y_{m2n2} represents the value of the pixel located in row m2 and column n2 of the second reference segmented image
- the system loss function is expressed as: L = λ01·L01 + λ02·L02, where L01 and L02 represent the first segmentation loss function and the second segmentation loss function, respectively, and λ01 and λ02 represent the weights of the first segmentation loss function and the second segmentation loss function in the system loss function, respectively.
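The inner formulas of the cross loss and similarity loss are given only symbolically above, so the sketch below substitutes common choices: binary cross-entropy for the cross loss and a Dice-style overlap term for the similarity loss (both substitutions, and all weight values, are assumptions). The weighted combinations, however, follow the definitions just given.

```python
import torch.nn.functional as F

def segmentation_loss(x, y, lam_cross, lam_sim, eps=1e-6):
    # x: training segmented image (values in [0, 1]); y: reference segmented image.
    cross = F.binary_cross_entropy(x, y)                              # assumed cross loss
    dice = 1 - (2 * (x * y).sum() + eps) / (x.sum() + y.sum() + eps)  # assumed similarity loss
    return lam_cross * cross + lam_sim * dice

def system_loss(seg1, seg2, ref1, ref2,
                lam01=0.5, lam02=0.5,   # weights of the two segmentation losses (placeholders)
                lam11=1.0, lam12=1.0,   # weights inside the first segmentation loss
                lam21=1.0, lam22=1.0):  # weights inside the second segmentation loss
    l01 = segmentation_loss(seg1, ref1, lam11, lam12)  # L01 = lam11*L11 + lam12*L21
    l02 = segmentation_loss(seg2, ref2, lam21, lam22)  # L02 = lam21*L12 + lam22*L22
    return lam01 * l01 + lam02 * l02                   # L = lam01*L01 + lam02*L02
```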
- obtaining the training input image includes: obtaining an original training input image; and performing preprocessing and data enhancement processing on the original training input image to obtain the training input image.
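A minimal data-enhancement sketch (random horizontal flipping only; the specific preprocessing and enhancement operations are not enumerated above, so this choice is an assumption):

```python
import torch

def augment(image, label):
    # Flip the image and its reference segmentation together so they stay aligned.
    if torch.rand(1).item() < 0.5:
        image = torch.flip(image, dims=[-1])  # flip along the width dimension
        label = torch.flip(label, dims=[-1])
    return image, label
```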
- At least one embodiment of the present disclosure further provides an image processing device, including: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions; when run by the processor, the computer-readable instructions execute the image processing method provided by any embodiment of the present disclosure or the training method provided by any embodiment of the present disclosure.
- At least one embodiment of the present disclosure further provides a storage medium for non-transitory storage of computer-readable instructions; when executed by a computer, the stored instructions can execute the image processing method provided by any embodiment of the present disclosure or the training method provided by any embodiment of the present disclosure.
- At least one embodiment of the present disclosure also provides a neural network, including two encoding and decoding networks and a joint layer, the two encoding and decoding networks including a first encoding and decoding network and a second encoding and decoding network; the first encoding and decoding network is configured to perform segmentation processing on the input image to obtain a first output feature map and a first segmented image; the joint layer is configured to combine the first output feature map with at least one of the input image and the first segmented image to obtain the input of the second encoding and decoding network; and the second encoding and decoding network is configured to perform segmentation processing on the input of the second encoding and decoding network to obtain the second segmented image.
- each of the two encoding and decoding networks includes an encoding element network and a decoding element network; the encoding element network of the first encoding and decoding network is configured to perform encoding processing on the input image to obtain a first encoding feature map; the decoding element network of the first encoding and decoding network is configured to perform decoding processing on the first encoding feature map to obtain the output of the first encoding and decoding network, which includes the first segmented image; the encoding element network of the second encoding and decoding network is configured to perform encoding processing on the input of the second encoding and decoding network to obtain a second encoding feature map; and the decoding element network of the second encoding and decoding network is configured to perform decoding processing on the second encoding feature map to obtain the output of the second encoding and decoding network, which includes the second segmented image.
- the encoding element network includes N encoding sub-networks and N-1 down-sampling layers; the N encoding sub-networks are connected in sequence, each down-sampling layer is used to connect two adjacent encoding sub-networks, N is an integer, and N ≥ 2; the i-th encoding sub-network among the N encoding sub-networks is configured to process the input of the i-th encoding sub-network to obtain the output of the i-th encoding sub-network; the down-sampling layer connecting the i-th and (i+1)-th encoding sub-networks is configured to perform down-sampling processing on the output of the i-th encoding sub-network to obtain the down-sampled output of the i-th encoding sub-network; and the (i+1)-th encoding sub-network is configured to process the down-sampled output of the i-th encoding sub-network to obtain the output of the (i+1)-th encoding sub-network, where i is an integer and 1 ≤ i ≤ N-1;
- the input of the first encoding sub-network among the N encoding sub-networks includes the input of the first encoding and decoding network or of the second encoding and decoding network; the input of the (i+1)-th encoding sub-network includes the down-sampled output of the i-th encoding sub-network; and the first encoding feature map or the second encoding feature map includes the outputs of the N encoding sub-networks.
- the decoding element network includes N-1 decoding sub-networks and N-1 up-sampling layers, and the N-1 decoding sub-networks are connected in sequence;
- the N-1 up-sampling layers include a first up-sampling layer and N-2 second up-sampling layers; the first up-sampling layer is used to connect the first decoding sub-network among the N-1 decoding sub-networks and the N-th encoding sub-network among the N encoding sub-networks, and each second up-sampling layer is used to connect two adjacent decoding sub-networks;
- each encoding and decoding network also includes N-1 sub-joint layers corresponding to the N-1 decoding sub-networks of the decoding element network;
- the j-th decoding sub-network among the N-1 decoding sub-networks is configured to process the input of the j-th decoding sub-network to obtain the output of the j-th decoding sub-network;
- the size of the up-sampling input of the j-th decoding sub-network is the same as the size of the output of the (N-j)-th encoding sub-network, where 1 ≤ j ≤ N-1.
- when N = 2, the encoding element network further includes a second encoding sub-network, the decoding element network includes a first decoding sub-network and a first up-sampling layer connecting the first decoding sub-network and the second encoding sub-network, and each encoding and decoding network further includes a first sub-joint layer corresponding to the first decoding sub-network of the decoding element network;
- the first up-sampling layer connecting the first decoding sub-network and the second encoding sub-network is configured to perform up-sampling processing on the output of the second encoding sub-network to obtain the up-sampling input of the first decoding sub-network;
- the first sub-joint layer is configured to combine the up-sampling input of the first decoding sub-network with the output of the first encoding sub-network as the input of the first decoding sub-network, where the size of the up-sampling input of the first decoding sub-network is the same as the size of the output of the first encoding sub-network.
- each of the N coding sub-networks and the N-1 decoding sub-networks includes: a first convolution module and a residual module;
- the first convolution module is configured to process the input of the sub-network corresponding to the first convolution module to obtain a first intermediate output;
- the residual module is configured to perform residual processing on the first intermediate output to obtain the output of the sub-network.
- the residual module includes a plurality of second convolution modules and a residual addition layer; the plurality of second convolution modules are configured to process the first intermediate output to obtain a second intermediate output; and the residual addition layer is configured to perform residual connection and addition processing on the first intermediate output and the second intermediate output to obtain the output of the sub-network.
- each of the first convolution module and the plurality of second convolution modules includes: a convolution layer, an activation layer, and a batch normalization layer;
- the convolution layer is configured to perform convolution processing
- the activation layer is configured to perform activation processing
- the batch normalization layer is configured to perform batch normalization processing.
- the size of the input and output of each decoding sub-network in the decoding element network is the same, and the size of the input and output of each encoding sub-network in the encoding element network is the same.
- each encoding and decoding network further includes a fusion module; the fusion module in the first encoding and decoding network is configured to process the first output feature map to obtain the first segmented image; and the second encoding and decoding network being configured to perform segmentation processing on the input of the second encoding and decoding network to obtain the second segmented image includes: the second encoding and decoding network is configured to perform segmentation processing on the input of the second encoding and decoding network to obtain a second output feature map, and the fusion module in the second encoding and decoding network is configured to process the second output feature map to obtain the second segmented image.
- FIG. 1 is a flowchart of an image processing method provided by some embodiments of the present disclosure
- FIG. 2 is a schematic structural block diagram of a neural network corresponding to the image processing method shown in FIG. 1 provided by some embodiments of the present disclosure
- FIG. 3 is a schematic structural block diagram of another neural network corresponding to the image processing method shown in FIG. 1 provided by some embodiments of the present disclosure
- FIG. 4 is an exemplary flowchart corresponding to step S200 in the image processing method shown in FIG. 1 according to some embodiments of the present disclosure
- FIG. 5 is a schematic diagram of a first area and a second area in an input image provided by some embodiments of the present disclosure
- Fig. 6 is a flowchart of a neural network training method provided by some embodiments of the present disclosure.
- FIG. 7 is an exemplary flowchart corresponding to step S400 in the training method shown in FIG. 6 according to some embodiments of the present disclosure
- FIG. 8 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present disclosure.
- FIG. 9 is a schematic diagram of a storage medium provided by an embodiment of the disclosure.
- Image segmentation is a research hotspot in the field of image processing.
- Image segmentation is a technique that divides an image into several specific regions with unique properties and extracts objects of interest.
- Medical image segmentation is an important application field of image segmentation. Medical image segmentation refers to extracting the region or boundary of the tissue of interest from the medical image, so that the extracted tissue can be clearly distinguished from other tissues. Medical image segmentation is of great significance to the quantitative analysis of tissues, the formulation of surgical plans and computer-aided diagnosis.
- deep learning neural networks can be used for medical image segmentation, which can improve the accuracy of image segmentation, reduce the time to extract features, and improve computational efficiency. Medical image segmentation can be used to extract regions of interest to facilitate the analysis and recognition of medical images.
- the convolutional layer, down-sampling layer, and up-sampling layer each refer to the corresponding processing operation, that is, convolution processing, down-sampling processing, up-sampling processing, and so on; likewise, the modules, sub-networks, etc. refer to the corresponding processing operations, and the description will not be repeated below.
- At least one embodiment of the present disclosure provides an image processing method.
- the image processing method includes: acquiring an input image; and using a neural network to process the input image to obtain a first segmented image and a second segmented image.
- the neural network includes two encoding and decoding networks.
- the two encoding and decoding networks include a first encoding and decoding network and a second encoding and decoding network.
- the input of the first encoding and decoding network includes an input image.
- Using a neural network to process the input image to obtain the first segmented image and the second segmented image includes: using the first encoding and decoding network to segment the input image to obtain the first output feature map and the first segmented image; combining the first output feature map with at least one of the input image and the first segmented image to obtain the input of the second encoding and decoding network; and using the second encoding and decoding network to segment the input of the second encoding and decoding network to obtain the second segmented image.
- Some embodiments of the present disclosure also provide image processing devices, neural networks, neural network training methods, and storage media corresponding to the above-mentioned image processing methods.
- the image processing method provided by the embodiments of the present disclosure obtains the first segmented image first, and then obtains the second segmented image based on the first segmented image, which improves robustness, generalization, and accuracy, and yields more stable segmentation results for images acquired under different light environments and by different imaging devices; at the same time, the end-to-end convolutional neural network model can be used to reduce manual operations.
- Fig. 1 is a flowchart of an image processing method provided by some embodiments of the present disclosure.
- the image processing method includes step S100 and step S200.
- Step S100: Obtain an input image.
- Step S200: Use a neural network to process the input image to obtain a first segmented image and a second segmented image.
- the input image may be various types of images, for example, including but not limited to medical images.
- medical images may include ultrasound images, X-ray computed tomography (CT) images, magnetic resonance imaging (MRI) images, digital subtraction angiography (DSA) images, positron emission tomography (PET) images, and so on.
- medical images may include brain tissue MRI images, spinal cord MRI images, fundus images, blood vessel images, pancreas CT images, lung CT images, and so on.
- the input image can be acquired by an image acquisition device.
- the image acquisition device may include, for example, ultrasound equipment, X-ray equipment, nuclear magnetic resonance equipment, nuclear medicine equipment, medical optical equipment, and thermal imaging equipment, which are not limited in the embodiments of the present disclosure.
- the input image can also be a person image, an image of animals and plants or a landscape image, etc.
- the input image can also be acquired by an image acquisition device such as a smartphone camera, a tablet computer camera, a personal computer camera, a digital camera lens, a surveillance camera, or a webcam.
- the input image can be a grayscale image or a color image.
- the size of the input image can be set according to implementation needs, which is not limited in the embodiment of the present disclosure.
- the input image may be an original image directly collected by an image collecting device, or an image obtained after preprocessing the original image.
- the image processing method provided by the embodiment of the present disclosure may further include an operation of preprocessing the input image. Preprocessing can eliminate irrelevant information or noise information in the input image, so as to better segment the input image.
- a neural network is used to segment the input image, that is, the shape of an object (for example, an organ or tissue) is segmented from the input image to obtain a corresponding segmented image.
- the first segmented image may correspond to the first region of the input image, for example, to an organ or tissue in the medical image (for example, the optic disc in a fundus image, the lung in a lung CT image, etc.); the second segmented image may correspond to the second region of the input image, where the first region of the input image surrounds the second region, for example, the second segmented image corresponds to a structure or lesion in the aforementioned organ or tissue (for example, the optic cup in a fundus image, a lung nodule in a lung CT image, etc.).
- the first segmented image and the second segmented image can be used for medical diagnosis, for example, for screening and diagnosis of glaucoma (based on segmentation of the optic disc and optic cup), early lung cancer (based on segmentation of the lung and lung nodules), and so on.
- FIG. 2 is a schematic structural block diagram of a neural network corresponding to the image processing method shown in FIG. 1 provided by some embodiments of the present disclosure
- FIG. 3 is a schematic architecture block diagram of another neural network corresponding to the image processing method shown in FIG. 1 provided by some embodiments of the present disclosure;
- FIG. 4 is an exemplary flowchart corresponding to step S200 in the image processing method shown in FIG. 1 provided by some embodiments of the present disclosure.
- step S200 in the image processing method shown in FIG. 1 will be described in detail with reference to FIGS. 2, 3 and 4.
- the neural network in the image processing method provided by the embodiments of the present disclosure may include two encoding and decoding networks: a first encoding and decoding network UN1 and a second encoding and decoding network UN2.
- for example, both the first encoding and decoding network UN1 and the second encoding and decoding network UN2 may be U-Nets, which is not limited in the embodiments of the present disclosure.
- the input of the first codec network UN1 includes an input image.
- a neural network is used to process the input image to obtain the first segmented image and the second segmented image, that is, step S200 includes step S210 to step S230.
- Step S210 Use the first codec network to perform segmentation processing on the input image to obtain a first output feature map and a first segmented image.
- the first encoding and decoding network UN1 includes an encoding element network LN1 and a decoding element network RN1.
- the segmentation processing of the first encoding and decoding network UN1 includes: using the encoding element network LN1 of the first encoding and decoding network UN1 to encode the input image (that is, the input of the first encoding and decoding network) to obtain the first encoding feature map F1; and using the decoding element network RN1 of the first encoding and decoding network UN1 to decode the first encoding feature map F1 to obtain the output of the first encoding and decoding network UN1.
- the output of the first encoding and decoding network UN1 includes the first segmented image; for example, as shown in Figures 2 and 3, the output of the first encoding and decoding network UN1 may also include the first output feature map F01, which can be used in the processing of the second encoding and decoding network UN2.
- the coding element network LN1 may include N coding sub-networks SLN1 and N-1 down-sampling layers DS, where N is an integer and N ⁇ 2.
- the N encoding sub-networks SLN1 are connected in sequence, and each down-sampling layer DS is used to connect two adjacent encoding sub-networks SLN1, that is, any two adjacent encoding sub-networks SLN1 are connected through a corresponding down-sampling layer DS.
- for example, as shown in Figure 2, from top to bottom, the encoding element network LN1 of the first encoding and decoding network UN1 includes a first encoding sub-network, a second encoding sub-network, a third encoding sub-network, and a fourth encoding sub-network in sequence; as shown in Figure 3, from top to bottom, the encoding element network LN1 of the first encoding and decoding network UN1 includes a first encoding sub-network and a second encoding sub-network in sequence.
- the down-sampling layer is used for down-sampling processing.
- the down-sampling layer can be used to reduce the scale of the input image, simplify the calculation complexity, and reduce over-fitting to a certain extent; on the other hand, the down-sampling layer can also perform feature compression to extract the input image Main features.
- the down-sampling layer can reduce the size of feature images, but does not change the number of feature images. For example, downsampling is used to reduce the size of the feature image, thereby reducing the data volume of the feature map.
- the down-sampling layer can implement down-sampling using methods such as max pooling, average pooling, strided convolution, decimation (for example, selecting fixed pixels), and demultiplexing output (demuxout, which splits an input image into multiple smaller images).
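The listed down-sampling methods map directly onto standard deep-learning primitives. A small sketch (PyTorch is an implementation assumption) showing three of them, each halving height and width while leaving the number of feature maps unchanged:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 128, 128)  # (batch, feature maps, height, width)

pool_max = nn.MaxPool2d(kernel_size=2, stride=2)      # max pooling
pool_avg = nn.AvgPool2d(kernel_size=2, stride=2)      # average pooling
conv_ds = nn.Conv2d(64, 64, kernel_size=2, stride=2)  # strided convolution

for down in (pool_max, pool_avg, conv_ds):
    assert down(x).shape == (1, 64, 64, 64)  # 128x128 -> 64x64, still 64 feature maps
```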
- the coding processing of the encoding element network LN1 includes: using the i-th encoding sub-network among the N encoding sub-networks SLN1 to process the input of the i-th encoding sub-network to obtain the output of the i-th encoding sub-network; using the down-sampling layer DS connecting the i-th and (i+1)-th encoding sub-networks among the N encoding sub-networks SLN1 to perform down-sampling processing on the output of the i-th encoding sub-network to obtain the down-sampled output of the i-th encoding sub-network; and using the (i+1)-th encoding sub-network to process the down-sampled output of the i-th encoding sub-network to obtain the output of the (i+1)-th encoding sub-network, where i is an integer and 1 ≤ i ≤ N-1; the input of the first encoding sub-network among the N encoding sub-networks SLN1 includes the input image (that is, the input of the first encoding and decoding network UN1).
- for example, the input and output sizes of each encoding sub-network SLN1 are the same.
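A sketch of this encoding chain (assuming 2×2 max pooling for the down-sampling layers DS; the sub-network internals are supplied by the caller):

```python
import torch.nn as nn

class EncodingElementNetwork(nn.Module):
    """N encoding sub-networks connected in sequence by N-1 down-sampling layers."""

    def __init__(self, sub_networks):
        super().__init__()
        self.subs = nn.ModuleList(sub_networks)  # the N encoding sub-networks SLN1
        self.down = nn.ModuleList(nn.MaxPool2d(2) for _ in sub_networks[1:])

    def forward(self, x):
        outputs = []  # the outputs of all N sub-networks form the encoding feature map
        for i, sub in enumerate(self.subs):
            x = sub(x)               # output of the i-th encoding sub-network
            outputs.append(x)
            if i < len(self.down):
                x = self.down[i](x)  # down-sampled output, fed to sub-network i+1
        return outputs
```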
- the decoding element network RN1 includes N-1 decoding sub-networks SRN1 and N-1 upsampling layers.
- for example, as shown in Figure 2, from bottom to top, the decoding element network RN1 of the first encoding and decoding network UN1 includes a first decoding sub-network, a second decoding sub-network, and a third decoding sub-network in sequence; as shown in Figure 3, the decoding element network RN1 of the first encoding and decoding network UN1 includes the first decoding sub-network.
- the up-sampling layer is used for up-sampling processing.
- the up-sampling process is used to increase the size of the feature image, thereby increasing the data volume of the feature map.
- the up-sampling layer can adopt up-sampling methods such as strided transposed convolution and interpolation algorithms to implement up-sampling processing.
- the interpolation algorithm may include, for example, bilinear interpolation and bicubic interpolation.
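Both families of up-sampling methods are available as standard operations; a sketch (PyTorch assumed) doubling height and width:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 128, 32, 32)

# Strided transposed convolution (learned up-sampling).
up_conv = nn.ConvTranspose2d(128, 128, kernel_size=2, stride=2)

# Interpolation-based up-sampling (bilinear shown; bicubic also possible).
x_bilinear = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

assert up_conv(x).shape == (1, 128, 64, 64)
assert x_bilinear.shape == (1, 128, 64, 64)
```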
- for example, the N-1 decoding sub-networks SRN1 are connected in sequence, and the N-1 up-sampling layers include the first up-sampling layer US1 and N-2 second up-sampling layers US2.
- the first up-sampling layer US1 is used to connect the first decoding sub-network among the N-1 decoding sub-networks SRN1 and the N-th encoding sub-network among the N encoding sub-networks SLN1, and each second up-sampling layer US2 is used to connect two adjacent decoding sub-networks, that is, any two adjacent decoding sub-networks SRN1 are connected through a corresponding second up-sampling layer US2.
- for example, the decoding processing of the decoding element network RN1 includes: obtaining the input of the j-th decoding sub-network among the N-1 decoding sub-networks SRN1; and using the j-th decoding sub-network to process the input of the j-th decoding sub-network to obtain the output of the j-th decoding sub-network, where j is an integer and 1 ≤ j ≤ N-1, and the output of the first encoding and decoding network UN1 includes the output of the (N-1)-th decoding sub-network among the N-1 decoding sub-networks SRN1.
- the output of the (N-1)-th decoding sub-network among the N-1 decoding sub-networks SRN1 (the third decoding sub-network in the example shown in Figure 2) is the first output feature map F01.
- for example, obtaining the input of the j-th decoding sub-network among the N-1 decoding sub-networks includes: using the second up-sampling layer US2 connecting the j-th decoding sub-network and the (j-1)-th decoding sub-network to perform up-sampling processing on the output of the (j-1)-th decoding sub-network to obtain the up-sampling input of the j-th decoding sub-network; and combining the up-sampling input of the j-th decoding sub-network with the output of the (N-j)-th encoding sub-network among the N encoding sub-networks SLN1 as the input of the j-th decoding sub-network.
- the size of the up-sampling input of the j-th decoding sub-network is the same as the size of the output of the (N-j)-th encoding sub-network among the N encoding sub-networks SLN1, where 1 ≤ j ≤ N-1.
- for example, taking the case where the up-sampling input of the j-th decoding sub-network and the output of the (N-j)-th encoding sub-network among the N encoding sub-networks SLN1 each include feature maps of H rows and W columns as an example, if the number of feature maps included in the up-sampling input of the j-th decoding sub-network is C1 and the number of feature maps included in the output of the (N-j)-th encoding sub-network is C2, then the feature map models of the two are (C1, H, W) and (C2, H, W), respectively.
- after the up-sampling input of the j-th decoding sub-network is combined with the output of the (N-j)-th encoding sub-network, the feature map model of the input of the j-th decoding sub-network is (C1+C2, H, W), that is, the number of feature maps included in the input of the j-th decoding sub-network is C1+C2; the present disclosure does not limit the arrangement order of the feature maps in the input feature map model of the j-th decoding sub-network. It should be noted that the embodiments of the present disclosure include but are not limited to this.
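Concretely, this combination is a concatenation along the feature-map dimension (a sketch; the names and channel counts are illustrative):

```python
import torch

C1, C2, H, W = 64, 64, 128, 128
upsampled = torch.randn(1, C1, H, W)  # up-sampling input of the j-th decoding sub-network
skip = torch.randn(1, C2, H, W)       # output of the (N-j)-th encoding sub-network

joined = torch.cat([upsampled, skip], dim=1)  # feature map model (C1+C2, H, W)
assert joined.shape == (1, C1 + C2, H, W)
```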
- connection may mean that two functional objects (for example, sub-network, down-sampling layer, up-sampling layer, etc.) are connected in the direction of signal (for example, feature map) transmission.
- that is, the output of the preceding functional object serves as the input of the following functional object.
- for example, as shown in Figure 3, when N = 2, the encoding element network LN1 includes a first encoding sub-network, a second encoding sub-network, and a down-sampling layer DS connecting the first encoding sub-network and the second encoding sub-network, and the decoding element network RN1 includes a first decoding sub-network and a first up-sampling layer US1 connecting the first decoding sub-network and the second encoding sub-network.
- in this case, the decoding processing of the decoding element network RN1 includes: using the first up-sampling layer US1 connecting the first decoding sub-network and the second encoding sub-network to perform up-sampling processing on the output of the second encoding sub-network to obtain the up-sampling input of the first decoding sub-network; combining the up-sampling input of the first decoding sub-network with the output of the first encoding sub-network as the input of the first decoding sub-network, where the size of the up-sampling input of the first decoding sub-network is the same as the size of the output of the first encoding sub-network; and using the first decoding sub-network to process the input of the first decoding sub-network to obtain the output of the first decoding sub-network, where the output of the first encoding and decoding network UN1 includes the output of the first decoding sub-network.
- the number of downsampling layers in the coding element network LN1 is equal to the number of upsampling layers in the decoding element network RN1.
- for example, the first down-sampling layer in the encoding element network LN1 and the last up-sampling layer in the decoding element network RN1 are located at the same level, the second down-sampling layer in the encoding element network LN1 and the penultimate up-sampling layer in the decoding element network RN1 are located at the same level, ..., and the last down-sampling layer in the encoding element network LN1 and the first up-sampling layer in the decoding element network RN1 are located at the same level.
- for example, in the example shown in Figure 2, the down-sampling layer connecting the first encoding sub-network and the second encoding sub-network is at the same level as the up-sampling layer connecting the second decoding sub-network and the third decoding sub-network; the down-sampling layer connecting the second encoding sub-network and the third encoding sub-network is at the same level as the up-sampling layer connecting the first decoding sub-network and the second decoding sub-network; and the down-sampling layer connecting the third encoding sub-network and the fourth encoding sub-network is at the same level as the first up-sampling layer connecting the first decoding sub-network and the fourth encoding sub-network.
- for example, if the down-sampling layer adopts a 2×2 down-sampling factor, then correspondingly the up-sampling layer at the same level adopts a 2×2 up-sampling factor; more generally, if the down-sampling factor of the down-sampling layer is 1/y, then the up-sampling factor of the corresponding up-sampling layer is y, where y is a positive integer, and y is usually greater than or equal to 2.
- in this way, the size of the up-sampling input of the j-th decoding sub-network can be made the same as the size of the output of the (N-j)-th encoding sub-network among the N encoding sub-networks SLN1, where N is an integer and N ≥ 2, and j is an integer and 1 ≤ j ≤ N-1.
- for example, each of the N encoding sub-networks SLN1 in the encoding element network LN1 and the N-1 decoding sub-networks SRN1 in the decoding element network RN1 may include a first convolution module CN1 and a residual module RES.
- for example, the processing of each sub-network includes: using the first convolution module CN1 to process the input of the sub-network corresponding to the first convolution module CN1 to obtain the first intermediate output; and using the residual module RES to perform residual processing on the first intermediate output to obtain the output of the sub-network.
- for example, the residual module RES may include a plurality of second convolution modules CN2; for example, the number of second convolution modules CN2 included in each residual module RES may be 2, but the present disclosure is not limited to this.
- using the residual module RES to perform residual processing on the first intermediate output to obtain the output of the sub-network includes: using the multiple second convolution modules CN2 to process the first intermediate output to obtain the second intermediate output; and performing residual connection and addition processing on the first intermediate output and the second intermediate output (shown as ADD in the figure) to obtain the output of the residual module RES, that is, the output of the sub-network.
- the output of each coding sub-network is the first coding feature map F1.
- for example, the size of the first intermediate output is the same as the size of the second intermediate output, so that after the residual connection and addition, the size of the output of the residual module RES (that is, the output of the corresponding sub-network) is the same as the size of the input of the residual module RES (that is, the size of the corresponding first intermediate output).
- each of the aforementioned first convolution module CN1 and second convolution module CN2 may include a convolutional layer, an activation layer, and a batch normalization layer, so that the processing of each convolution module can include: convolution processing, activation processing, and batch normalization processing.
- the convolutional layer is the core layer of the convolutional neural network.
- the convolutional layer can apply several convolution kernels (also called filters) to its input (for example, input image) to extract multiple types of features of the input.
- for example, the convolutional layer may include a 3×3 convolution kernel.
- the convolutional layer can include multiple convolution kernels, and each convolution kernel can extract one type of feature.
- the convolution kernel is generally initialized in the form of a random decimal matrix. During the training process of the convolutional neural network, the convolution kernel will learn to obtain reasonable weights.
- the result obtained after applying a convolution kernel to the input image is called a feature map, and the number of feature maps is equal to the number of convolution kernels.
- Each feature map is composed of some neurons arranged in a rectangle.
- the neurons of the same feature map share weights, and the shared weights here are the convolution kernels.
- the feature image output by the convolutional layer of one level can be input to the convolutional layer of the next adjacent level and processed again to obtain a new feature image.
- the activation layer includes an activation function, and the activation function is used to introduce nonlinear factors to the convolutional neural network, so that the convolutional neural network can better solve more complex problems.
- the activation function may include a rectified linear unit (ReLU) function, a sigmoid function, or a hyperbolic tangent function (tanh function).
- the ReLU function is a non-saturating nonlinear function, while the sigmoid and tanh functions are saturating nonlinear functions.
- the activation layer can be used as a layer of the convolutional neural network alone, or the activation layer can also be included in the convolutional layer.
- the batch normalization layer is used to perform batch normalization processing on the feature image, so that the gray value of the pixel of the feature image changes within a predetermined range, thereby reducing the difficulty of calculation and improving the contrast.
- the predetermined range may be [-1, 1].
- the processing method of the batch standardization layer can refer to the common batch standardization process, which will not be repeated here.
- for example, the input and output sizes of the first convolution module CN1 are the same, so that the input and output sizes of each encoding sub-network in the encoding element network LN1 are the same, and the input and output sizes of each decoding sub-network in the decoding element network RN1 are the same.
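Putting the pieces above together, a size-preserving sketch of one sub-network (3×3 convolutions with padding, the conv → activation → batch-normalization order as listed above, and an additive residual connection; all channel counts are assumptions):

```python
import torch.nn as nn

class ConvModule(nn.Module):
    """Convolution + activation + batch normalization; padding keeps sizes unchanged."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.block(x)

class SubNetwork(nn.Module):
    """First convolution module CN1 followed by a residual module RES
    containing two second convolution modules CN2."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.first = ConvModule(in_ch, out_ch)
        self.res = nn.Sequential(ConvModule(out_ch, out_ch), ConvModule(out_ch, out_ch))

    def forward(self, x):
        first_out = self.first(x)         # first intermediate output
        second_out = self.res(first_out)  # second intermediate output
        return first_out + second_out     # residual connection and addition (ADD)
```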
- the first codec network UN1 may also include a fusion module MG.
- the fusion module MG in the first codec network UN1 is used to process the first output feature map F01 to obtain the first segmented image.
- for example, the fusion module MG in the first encoding and decoding network UN1 may use a 1×1 convolution kernel to process the first output feature map F01 to obtain the first segmented image; it should be noted that the embodiments of the present disclosure include but are not limited to this.
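A one-line sketch of such a fusion module (the sigmoid, which maps the result to a probability-like segmented image, is an assumption):

```python
import torch.nn as nn

# Collapse the C feature maps of the output feature map into one segmented image.
fusion = nn.Sequential(nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid())  # C=64 assumed
```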
- Step S220 Combine the first output feature map with at least one of the input image and the first segmented image to obtain the input of the second codec network.
- the size of the first output feature map F01 is the same as the size of the input image.
- for example, the manner of this combination can refer to the foregoing description of combining the up-sampling input of the j-th decoding sub-network with the output of the (N-j)-th encoding sub-network among the N encoding sub-networks SLN1, and will not be repeated here.
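An end-to-end sketch of steps S210 to S230, assuming the joint layer concatenates F01 with both the input image and the first segmented image (the names are illustrative):

```python
import torch

def two_stage_segmentation(un1, un2, fusion1, fusion2, input_image):
    f01 = un1(input_image)  # first output feature map (same height/width as the input)
    seg1 = fusion1(f01)     # first segmented image (step S210)
    un2_input = torch.cat([f01, input_image, seg1], dim=1)  # joint layer CONCAT (step S220)
    f02 = un2(un2_input)    # second output feature map
    seg2 = fusion2(f02)     # second segmented image (step S230)
    return seg1, seg2
```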
- Step S230 Use the second codec network to perform segmentation processing on the input of the second codec network to obtain a second segmented image.
- the second encoding and decoding network UN2 includes an encoding element network LN2 and a decoding element network RN2.
- the segmentation processing of the second encoding and decoding network UN2 includes: using the encoding element network LN2 of the second encoding and decoding network UN2 to encode the input of the second encoding and decoding network to obtain the second encoding feature map F2; and using the decoding element network RN2 of the second encoding and decoding network UN2 to perform decoding processing on the second encoding feature map F2 to obtain the output of the second encoding and decoding network UN2.
- the second coding feature map F2 includes the output of the N coding sub-networks SLN1 in the coding element network LN2.
- the output of the second encoding and decoding network UN2 may include the second segmented image.
- the structure and processing procedure of the encoding element network LN2 and the decoding element network RN2 of the second encoding and decoding network UN2 can respectively refer to the foregoing description of the encoding element network LN1 and the decoding element network RN1 of the first encoding and decoding network UN1, and will not be repeated here.
- Figures 2 and 3 both show that the second encoding and decoding network UN2 and the first encoding and decoding network UN1 have the same structure (that is, they include the same number of encoding sub-networks and the same number of decoding sub-networks), but the embodiments of the present disclosure are not limited to this; that is, the second encoding and decoding network UN2 may also have a structure similar to that of the first encoding and decoding network UN1, but the number of encoding sub-networks included in the second encoding and decoding network UN2 may be different from that of the first encoding and decoding network UN1.
- the second codec network UN2 may also include a fusion module MG.
- for example, using the second encoding and decoding network UN2 to perform segmentation processing on the input of the second encoding and decoding network UN2 to obtain the second segmented image includes: using the second encoding and decoding network UN2 to segment the input of the second encoding and decoding network UN2 to obtain the second output feature map F02; and using the fusion module MG in the second encoding and decoding network UN2 to process the second output feature map F02 to obtain the second segmented image.
- for example, the fusion module MG in the second encoding and decoding network UN2 may use a 1×1 convolution kernel to process the second output feature map F02 to obtain the second segmented image; it should be noted that the embodiments of the present disclosure include but are not limited to this.
- For example, the first segmented image corresponds to a first region of the input image, and the second segmented image corresponds to a second region of the input image.
- FIG. 5 is a schematic diagram of a first area and a second area in an input image provided by some embodiments of the present disclosure.
- the first region R1 of the input image surrounds the second region R2 of the input image, that is, the second region R2 is located in the first region R1.
- For example, the first segmented image and the second segmented image can be used for medical diagnosis, for example, for screening and diagnosis of glaucoma (based on the segmentation of the optic disc and the optic cup, where the first region corresponds to the optic disc and the second region corresponds to the optic cup) and of early-stage lung cancer (based on the segmentation of the lungs and lung nodules, where the first region corresponds to the lungs and the second region corresponds to the lung nodules), etc.
- For example, in glaucoma screening, the area ratio of the optic cup to the optic disc (i.e., the cup-to-disc ratio) can be calculated, and screening and diagnosis can be performed according to the relative magnitude of this ratio and a preset threshold, as sketched below; details are not repeated here. It should be noted that the embodiments of the present disclosure include but are not limited to this.
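- A minimal sketch of this screening rule, assuming binary masks for the two regions; the threshold value is a placeholder rather than a clinically validated figure:

```python
# A hedged sketch: compute the cup-to-disc area ratio from two binary
# segmentation masks and compare it against a preset threshold.
import numpy as np

def cup_to_disc_ratio(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    disc_area = np.count_nonzero(disc_mask)  # first region: optic disc
    cup_area = np.count_nonzero(cup_mask)    # second region: optic cup
    return cup_area / disc_area if disc_area > 0 else 0.0

def flag_for_review(disc_mask, cup_mask, threshold: float = 0.6) -> bool:
    # A ratio above the preset threshold flags the image for further review.
    return cup_to_disc_ratio(disc_mask, cup_mask) > threshold
```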
- It should be noted that the first region R1 and the second region R2 in the input image shown in FIG. 5 are illustrative, and the embodiments of the present disclosure do not limit this.
- For example, the first region in the input image may include one connected region (as shown in FIG. 5), and the second region in the input image may likewise include one connected region (as shown in FIG. 5).
- For another example, the first region in the input image may also include multiple separate first sub-regions; in this case, the second region in the input image may include one connected region (located within one first sub-region), or it may include multiple separate second sub-regions (located within one first sub-region or within several separate first sub-regions).
- In addition, the second region being located within the first region may include the case where the edge of the second region does not overlap the edge of the first region, as well as the case where at least part of the edge of the second region overlaps the edge of the first region; the embodiments of the present disclosure do not limit this.
- It should be noted that in the embodiments of the present disclosure, the same or similar functional objects may have the same or similar structure or processing process, but the parameters corresponding to the same or similar functional objects may be the same or different; the embodiments of the present disclosure do not limit this.
- In the image processing method provided by the embodiments of the present disclosure, robustness can be improved, higher generalization and accuracy can be achieved, and the segmentation results are more stable for images acquired under different lighting environments and with different imaging equipment; at the same time, the end-to-end convolutional neural network model can be used to reduce manual operations.
- At least one embodiment of the present disclosure also provides a neural network, which can be used to execute the image processing method provided in the foregoing embodiment.
- the structure of the neural network can refer to the structure of the neural network shown in FIG. 2 or FIG. 3.
- For example, the neural network provided by the embodiments of the present disclosure includes two encoding and decoding networks, the two encoding and decoding networks including a first encoding and decoding network UN1 and a second encoding and decoding network UN2; the neural network also includes a joint layer (shown as the CONCAT connecting the first codec network UN1 and the second codec network UN2 in FIG. 2 and FIG. 3).
- both the first codec network UN1 and the second codec network UN2 may be U-nets, which are not limited in the embodiment of the present disclosure.
- the input of the first codec network UN1 includes an input image.
- the neural network is configured to process the input image to obtain the first segmented image and the second segmented image.
- The first encoding and decoding network UN1 is configured to perform segmentation processing on the input image to obtain the first output feature map F01 and the first segmented image.
- the first encoding and decoding network UN1 includes an encoding element network LN1 and a decoding element network RN1.
- The encoding element network LN1 of the first encoding and decoding network UN1 is configured to perform encoding processing on the input image (that is, the input of the first encoding and decoding network) to obtain the first encoding feature map F1; the decoding element network RN1 of the first encoding and decoding network UN1 is configured to perform decoding processing on the first encoding feature map F1 to obtain the output of the first encoding and decoding network UN1.
- The output of the first encoding and decoding network UN1 includes the first segmented image; for example, as shown in Figures 2 and 3, the output of the first encoding and decoding network UN1 may also include the first output feature map F01, and the first output feature map F01 can be used in the processing of the second encoding and decoding network UN2.
- the coding element network LN1 may include N coding sub-networks SLN1 and N-1 down-sampling layers DS, where N is an integer and N ⁇ 2.
- The N coding sub-networks SLN1 are connected in sequence, and each down-sampling layer DS is used to connect two adjacent coding sub-networks SLN1; that is, any two adjacent coding sub-networks SLN1 are connected through a corresponding down-sampling layer DS.
- For example, as shown in FIG. 2, from top to bottom, the encoding element network LN1 of the first codec network UN1 includes a first coding sub-network, a second coding sub-network, a third coding sub-network, and a fourth coding sub-network in sequence; as shown in FIG. 3, from top to bottom, the encoding element network LN1 of the first codec network UN1 includes a first coding sub-network and a second coding sub-network in sequence.
- the i-th coding sub-network of the N coding sub-networks SLN1 is configured to process the input of the i-th coding sub-network to obtain the output of the i-th coding sub-network ;
- the down-sampling layer DS connecting the i-th coding sub-network and the i+1-th coding sub-network in the N coding sub-networks SLN1 is configured to down-sample the output of the i-th coding sub-network to obtain the The down-sampled output of the i coding sub-network;
- the i+1-th coding sub-network is configured to process the down-sampled output of the i-th coding sub-network to obtain the output of the i+1-th coding sub-network;
- i is an integer and 1 ⁇ i ⁇ N-1
- The input of the first coding sub-network in the N coding sub-networks SLN1 includes the input of the first encoding and decoding network UN1 (that is, the input image); except for the first coding sub-network, the input of the (i+1)-th coding sub-network includes the down-sampled output of the i-th coding sub-network.
- For example, the input and output sizes of each coding sub-network SLN1 are the same.
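- A minimal sketch of such an encoding element network with N=4, where `EncodeSubNet` is a stand-in for the sub-network structure described further below (first convolution module plus residual module) and the channel widths are assumptions:

```python
# A hedged sketch of the coding element network: N coding sub-networks
# connected in sequence, with a down-sampling layer between each pair.
import torch
import torch.nn as nn

class EncodeSubNet(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),  # keeps spatial size
            nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class EncoderMetaNet(nn.Module):
    def __init__(self, channels=(1, 64, 128, 256, 512)):  # N = 4 sub-networks
        super().__init__()
        self.subnets = nn.ModuleList(
            EncodeSubNet(channels[k], channels[k + 1]) for k in range(4)
        )
        # N-1 down-sampling layers, one between each pair of sub-networks
        self.down = nn.MaxPool2d(2)

    def forward(self, x):
        outputs = []  # the encoding feature map gathers all N sub-network outputs
        for k, subnet in enumerate(self.subnets):
            x = subnet(x)
            outputs.append(x)
            if k < len(self.subnets) - 1:
                x = self.down(x)  # down-sampled output feeds the next sub-network
        return outputs
```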
- the decoding element network RN1 includes N-1 decoding sub-networks SRN1 and N-1 upsampling layers.
- For example, as shown in FIG. 2, from bottom to top, the decoding element network RN1 of the first encoding and decoding network UN1 includes a first decoding sub-network, a second decoding sub-network, and a third decoding sub-network in sequence; as shown in FIG. 3, the decoding element network RN1 of the first encoding and decoding network UN1 includes the first decoding sub-network.
- The N-1 decoding sub-networks SRN1 are connected in sequence, and the N-1 up-sampling layers include a first up-sampling layer US1 and N-2 second up-sampling layers US2; the first up-sampling layer US1 is used to connect the first decoding sub-network in the N-1 decoding sub-networks SRN1 and the N-th coding sub-network in the N coding sub-networks SLN1, and each second up-sampling layer US2 is used to connect two adjacent decoding sub-networks; that is, any two adjacent decoding sub-networks SRN1 are connected through a corresponding second up-sampling layer US2.
- The first encoding and decoding network UN1 also includes N-1 sub-joint layers corresponding to the N-1 decoding sub-networks SRN1 of the decoding element network RN1 (as shown by the CONCAT in the decoding element network RN1 in FIG. 2).
- The j-th decoding sub-network in the N-1 decoding sub-networks SRN1 is configured to process the input of the j-th decoding sub-network to obtain the output of the j-th decoding sub-network, where j is an integer and 1≤j≤N-1, and the output of the first encoding and decoding network UN1 includes the output of the (N-1)-th decoding sub-network of the N-1 decoding sub-networks SRN1.
- That is, the output of the (N-1)-th decoding sub-network in the N-1 decoding sub-networks SRN1 (the third decoding sub-network in the example shown in FIG. 2) is the first output feature map F01.
- The first up-sampling layer US1 is configured to perform up-sampling processing on the output of the N-th encoding sub-network to obtain the up-sampling input of the first decoding sub-network; the second up-sampling layer US2 connecting the j-th decoding sub-network and the (j-1)-th decoding sub-network in the N-1 decoding sub-networks SRN1 is configured to perform up-sampling processing on the output of the (j-1)-th decoding sub-network to obtain the up-sampling input of the j-th decoding sub-network, where j is an integer and 1<j≤N-1.
- The j-th sub-joint layer of the N-1 sub-joint layers is configured to combine the up-sampling input of the j-th decoding sub-network with the output of the (N-j)-th coding sub-network in the N coding sub-networks SLN1 as the input of the j-th decoding sub-network, where j is an integer and 1≤j≤N-1.
- the size of the up-sampling input of the j-th decoding sub-network is the same as the size of the output of the N-j-th coding sub-network in the N coding sub-networks SLN1, where 1 ⁇ j ⁇ N-1.
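- A minimal sketch of one such decoding step, assuming a 2×2 up-sampling factor and illustrative channel counts: the previous decoder output is up-sampled to the size of the matching encoder output, joined with it, and passed to the decoding sub-network:

```python
# A hedged sketch of one decoder step: up-sample, join (CONCAT), decode.
import torch
import torch.nn as nn

up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
decode_subnet = nn.Conv2d(256 + 256, 256, 3, padding=1)  # stand-in sub-network

prev_out = torch.randn(1, 256, 32, 32)  # output of the (j-1)-th decoding sub-network
skip = torch.randn(1, 256, 64, 64)      # output of the (N-j)-th coding sub-network

upsampled = up(prev_out)                      # now 64x64, same size as `skip`
joined = torch.cat([upsampled, skip], dim=1)  # the j-th sub-joint layer
out_j = decode_subnet(joined)                 # output of the j-th decoding sub-network
```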
- For example, in the case of N=2 (as shown in FIG. 3), the encoding element network LN1 includes a first coding sub-network, a second coding sub-network, and a down-sampling layer DS connecting the first coding sub-network and the second coding sub-network; the decoding element network RN1 includes a first decoding sub-network and a first up-sampling layer US1 connecting the first decoding sub-network and the second encoding sub-network.
- The first encoding and decoding network UN1 also includes a first sub-joint layer corresponding to the first decoding sub-network SRN1 of the decoding element network RN1 (as shown by the CONCAT in the decoding element network RN1 in FIG. 3).
- The first up-sampling layer US1 connecting the first decoding sub-network and the second encoding sub-network is configured to perform up-sampling processing on the output of the second encoding sub-network to obtain the up-sampling input of the first decoding sub-network.
- The first sub-joint layer is configured to combine the up-sampling input of the first decoding sub-network with the output of the first encoding sub-network as the input of the first decoding sub-network, where the size of the up-sampling input of the first decoding sub-network is the same as the size of the output of the first encoding sub-network; the first decoding sub-network is configured to process the input of the first decoding sub-network to obtain the output of the first decoding sub-network; wherein the output of the first encoding and decoding network UN1 includes the output of the first decoding sub-network.
- the number of downsampling layers in the coding element network LN1 is equal to the number of upsampling layers in the decoding element network RN1.
- That is, the first down-sampling layer in the encoding element network LN1 and the last up-sampling layer in the decoding element network RN1 are located at the same level, the second down-sampling layer in the encoding element network LN1 and the penultimate up-sampling layer in the decoding element network RN1 are located at the same level, ..., and the last down-sampling layer in the encoding element network LN1 and the first up-sampling layer in the decoding element network RN1 are located at the same level.
- For example, in the example shown in FIG. 2, the down-sampling layer used to connect the first encoding sub-network and the second encoding sub-network is at the same level as the up-sampling layer used to connect the second decoding sub-network and the third decoding sub-network; the down-sampling layer used to connect the second encoding sub-network and the third encoding sub-network is at the same level as the up-sampling layer used to connect the first decoding sub-network and the second decoding sub-network; and the down-sampling layer used to connect the third encoding sub-network and the fourth encoding sub-network is at the same level as the first up-sampling layer US1 used to connect the fourth encoding sub-network and the first decoding sub-network.
- For the down-sampling layer and the up-sampling layer located at the same level, the down-sampling factor of the down-sampling layer corresponds to the up-sampling factor of the up-sampling layer: if the down-sampling factor of the down-sampling layer is 1/y, the up-sampling factor of the up-sampling layer is y, where y is a positive integer and y is usually greater than or equal to 2 (for example, a 2×2 down-sampling corresponds to a 2×2 up-sampling).
- In this way, the size of the up-sampling input of the j-th decoding sub-network can be made the same as the size of the output of the (N-j)-th coding sub-network in the N coding sub-networks SLN1, where N is an integer and N≥2, and j is an integer and 1≤j≤N-1.
- For example, each of the N coding sub-networks SLN1 in the encoding element network LN1 and the N-1 decoding sub-networks SRN1 in the decoding element network RN1 may include a first convolution module CN1 and a residual module RES.
- The first convolution module CN1 is configured to process the input of the sub-network corresponding to the first convolution module CN1 to obtain the first intermediate output; the residual module RES is configured to perform residual processing on the first intermediate output to obtain the output of the sub-network.
- The residual module RES may include multiple second convolution modules CN2 and a residual addition layer (as shown by ADD in FIGS. 2 and 3); for example, the number of second convolution modules CN2 included in each residual module RES may be 2, but the present disclosure is not limited thereto.
- The plurality of second convolution modules CN2 are configured to process the first intermediate output to obtain the second intermediate output; the residual addition layer is configured to perform residual-connection addition processing on the first intermediate output and the second intermediate output to obtain the output of the residual module RES, that is, the output of the sub-network.
- For example, the first encoding feature map F1 includes the outputs of the N coding sub-networks SLN1.
- the size of the first intermediate output is the same as the size of the second intermediate output.
- The size of the output of the residual module RES (that is, the output of the corresponding sub-network) is the same as the size of the input of the residual module RES (that is, the corresponding first intermediate output).
- each of the foregoing first convolution module CN1 and second convolution module CN2 may include a convolution layer, an activation layer, and a batch normalization layer (Batch Normalization Layer).
- the convolutional layer is configured to perform convolution processing
- the activation layer is configured to perform activation processing
- the batch normalization layer is configured to perform batch normalization processing.
- For example, the input and output sizes of the first convolution module CN1 are the same, so that the input and output sizes of each encoding sub-network in the encoding element network LN1 are the same, and the input and output sizes of each decoding sub-network in the decoding element network RN1 are the same.
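- A minimal sketch of these building blocks, assuming 3×3 convolutions with padding so that input and output sizes match, two second convolution modules per residual module, and equal channel counts throughout (the disclosure does not fix these values):

```python
# A hedged sketch of a convolution module (convolution + activation + batch
# normalization) and a residual module with a residual addition (ADD).
import torch
import torch.nn as nn

def conv_module(channels: int) -> nn.Sequential:
    # convolution layer + activation layer + batch normalization layer;
    # padding=1 keeps the input and output spatial sizes identical.
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.BatchNorm2d(channels),
    )

class ResidualModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # two second convolution modules CN2, as in the example above
        self.cn2 = nn.Sequential(conv_module(channels), conv_module(channels))

    def forward(self, first_intermediate: torch.Tensor) -> torch.Tensor:
        second_intermediate = self.cn2(first_intermediate)
        # residual connection and addition (the ADD layer)
        return first_intermediate + second_intermediate
```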
- the first codec network UN1 may also include a fusion module MG.
- the fusion module MG in the first codec network UN1 is configured to process the first output feature map F01 to obtain the first segmented image.
- For example, the fusion module MG in the first encoding and decoding network UN1 may use a 1×1 convolution kernel to process the first output feature map F01 to obtain the first segmented image; it should be noted that the embodiments of the present disclosure include but are not limited to this.
- the joint layer is configured to combine the first output feature map F01 with at least one of the input image and the first segmented image to obtain the input of the second codec network.
- the size of the first output feature map F01 is the same as the size of the input image.
- The second encoding and decoding network UN2 is configured to perform segmentation processing on the input of the second encoding and decoding network to obtain a second segmented image.
- the second encoding and decoding network UN2 includes an encoding element network LN2 and a decoding element network RN2.
- the encoding element network LN2 of the second encoding and decoding network UN2 is configured to perform encoding processing on the input of the second encoding and decoding network to obtain the second encoding feature map F2; the decoding element network RN2 of the second encoding and decoding network UN2 is configured to The second encoding feature map F2 is decoded to obtain the output of the second encoding and decoding network UN2.
- For example, the second encoding feature map F2 includes the outputs of the N coding sub-networks in the encoding element network LN2.
- the output of the second encoding and decoding network UN2 may include the second segmented image.
- For the structure and function of the encoding element network LN2 and the decoding element network RN2 of the second encoding and decoding network UN2, reference may be made to the related descriptions of the structure and function of the encoding element network LN1 and the decoding element network RN1 of the first encoding and decoding network UN1, respectively, which will not be repeated here.
- It should be noted that although Figures 2 and 3 both show the second codec network UN2 and the first codec network UN1 having the same structure (that is, including the same number of coding sub-networks and the same number of decoding sub-networks), the embodiments of the present disclosure are not limited to this. That is to say, the second encoding and decoding network UN2 may also have a structure similar to that of the first encoding and decoding network UN1, but the number of encoding sub-networks included in the second encoding and decoding network UN2 may be different from the number included in the first encoding and decoding network UN1.
- the second codec network UN2 may also include a fusion module MG.
- The second codec network UN2 being configured to perform segmentation processing on the input of the second codec network UN2 to obtain a second segmented image includes: the second codec network UN2 is configured to perform segmentation processing on the input of the second codec network UN2 to obtain the second output feature map F02; and the fusion module MG in the second encoding and decoding network UN2 is configured to process the second output feature map F02 to obtain the second segmented image.
- For example, the fusion module MG in the second encoding and decoding network UN2 may use a 1×1 convolution kernel to process the second output feature map F02 to obtain the second segmented image; it should be noted that the embodiments of the present disclosure include but are not limited to this.
- FIG. 6 is a flowchart of a neural network training method provided by some embodiments of the present disclosure.
- the training method includes step S300 and step S400.
- Step S300 Obtain training input images.
- the training input image may also be various types of images, including, but not limited to, medical images, for example.
- the training input image can be acquired by an image acquisition device.
- the image acquisition device may include, for example, ultrasound equipment, X-ray equipment, nuclear magnetic resonance equipment, nuclear medicine equipment, medical optical equipment, and thermal imaging equipment, which are not limited in the embodiments of the present disclosure.
- training input images can also be images of people, plants and animals, or landscape images, etc.
- For example, the training input images may also be acquired by an image acquisition device such as the camera of a smartphone, the camera of a tablet computer, the camera of a personal computer, the lens of a digital camera, a surveillance camera, or a webcam.
- the training input image may also be a sample image in a sample set prepared in advance.
- the sample set also includes a standard segmentation map (ie, ground truth) of the sample image.
- the training input image can be a grayscale image or a color image.
- obtaining the training input image may include: obtaining the original training input image; and performing preprocessing and data enhancement processing on the original training input image to obtain the training input image.
- the original training input image is generally an image directly collected by an image collection device.
- the original training input image can be preprocessed and data augmented.
- preprocessing can eliminate irrelevant information or noise information in the original training input image, so as to better segment the training input image.
- the preprocessing may include, for example, image scaling on the original training input image. Image scaling includes scaling and cropping the original training input image to a preset size to facilitate subsequent image segmentation processing.
- The preprocessing may also include gamma correction, image de-redundancy (cropping out redundant parts of the image), image enhancement (adaptive color equalization, image alignment, color correction, etc.), noise-reduction filtering, and similar operations; reference may be made to common processing methods, which will not be repeated here.
- Data enhancement processing includes expanding the training input image data through methods such as random cropping, rotation, flipping, skewing, and affine transformation, thereby increasing the diversity of the training input images, reducing overfitting during training, and improving the robustness and generalization of the convolutional neural network model; a brief sketch follows.
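- A minimal sketch of such preprocessing and data enhancement using torchvision; the target sizes and ranges are placeholders, and for segmentation training the same geometric transforms would also have to be applied to the reference segmentation images:

```python
# A hedged sketch of a preprocessing and data-enhancement pipeline.
from torchvision import transforms

preprocess_and_augment = transforms.Compose([
    transforms.Resize((288, 288)),                 # scale to a preset size
    transforms.RandomCrop(256),                    # random cropping
    transforms.RandomRotation(degrees=15),         # rotation
    transforms.RandomHorizontalFlip(),             # flipping
    transforms.RandomAffine(degrees=0, shear=10),  # skew / affine transform
    transforms.ToTensor(),
])
# training_input = preprocess_and_augment(original_training_input_image)
```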
- Step S400 Use the training input image to train the neural network to be trained to obtain the neural network in the image processing method provided in any embodiment of the present disclosure.
- the structure of the neural network to be trained may be the same as the neural network shown in FIG. 2 or the neural network shown in FIG. 3, and the embodiments of the present disclosure include but are not limited to this.
- After being trained by the training method, the neural network to be trained can execute the image processing method provided by any of the above embodiments of the present disclosure; that is, the neural network obtained by the training method can execute the image processing method provided by any of the above embodiments of the present disclosure.
- FIG. 7 is an exemplary flowchart corresponding to step S400 in the training method shown in FIG. 6 provided by some embodiments of the present disclosure.
- the neural network to be trained is trained using the training input image, that is, step S400 includes step S410 to step S430.
- Step S410 Use the neural network to be trained to process the training input image to obtain a first training segmentation image and a second training segmentation image.
- The processing of step S410 can refer to the related description of the aforementioned step S200, where the neural network to be trained, the training input image, the first training segmentation image, and the second training segmentation image in step S410 correspond respectively to the neural network, the input image, the first segmented image, and the second segmented image in step S200; details are not repeated here.
- the initial parameters of the neural network to be trained may be random numbers, for example, the random numbers conform to a Gaussian distribution. It should be noted that the embodiments of the present disclosure do not limit this.
- Step S420 Based on the first reference segmentation image and the second reference segmentation image of the training input image, as well as the first training segmentation image and the second training segmentation image, calculate the system loss value of the neural network to be trained through the system loss function, where the first training segmentation image corresponds to the first reference segmentation image and the second training segmentation image corresponds to the second reference segmentation image.
- the training input image is a sample image in a sample set prepared in advance.
- The first reference segmentation image and the second reference segmentation image are respectively the first standard segmentation image and the second standard segmentation image (i.e., ground truth) corresponding to the sample image included in the sample set.
- The first training segmentation image corresponding to the first reference segmentation image means that the first training segmentation image and the first reference segmentation image correspond to the same region (for example, the first region) of the training input image; the second training segmentation image corresponding to the second reference segmentation image means that the second training segmentation image and the second reference segmentation image correspond to the same region (for example, the second region) of the training input image.
- the first area of the training input image surrounds the second area of the training input image, that is, the second area of the training input image is located within the first area of the training input image.
- the system loss function may include a first segmentation loss function and a second segmentation loss function.
- For example, the system loss function can be expressed as: L = λ01·L01 + λ02·L02, where L01 and L02 respectively represent the first segmentation loss function and the second segmentation loss function, and λ01 and λ02 respectively represent the weights of the first segmentation loss function and the second segmentation loss function in the system loss function.
- the first segmentation loss function may include a binary (cross-entropy) loss function and a similarity (softdice) loss function.
- For example, the first segmentation loss function can be expressed as: L01 = λ11·L11 + λ12·L21, where L01 represents the first segmentation loss function, L11 represents the cross-entropy loss function in the first segmentation loss function, λ11 represents the weight of the cross-entropy loss function in the first segmentation loss function, L21 represents the similarity loss function in the first segmentation loss function, and λ12 represents the weight of the similarity loss function in the first segmentation loss function.
- The cross-entropy loss function L11 and the similarity loss function L21 in the first segmentation loss function are defined over the pixel values of the segmentation images, where x_m1n1 represents the value of the pixel located in row m1 and column n1 of the first training segmentation image, and y_m1n1 represents the value of the pixel located in row m1 and column n1 of the first reference segmentation image; their exact expressions are not reproduced here.
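- As a hedged reference only (an assumption about the customary definitions, not a quotation of the disclosure), the standard binary cross-entropy and soft Dice forms over the pixel values just defined would read:

```latex
L_{11} = -\sum_{m1,\,n1}\left[\,y_{m1n1}\log x_{m1n1} + (1-y_{m1n1})\log(1-x_{m1n1})\,\right],
\qquad
L_{21} = 1 - \frac{2\sum_{m1,\,n1} x_{m1n1}\,y_{m1n1}}{\sum_{m1,\,n1} x_{m1n1}^{2} + \sum_{m1,\,n1} y_{m1n1}^{2}}
```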
- the training goal is to minimize the system loss value. Therefore, in the training process of the neural network to be trained, minimizing the system loss value includes minimizing the first segmentation loss function value.
- the second segmentation loss function may also include a binary (cross-entropy) loss function and a similarity (softdice) loss function.
- For example, the second segmentation loss function can be expressed as: L02 = λ21·L12 + λ22·L22, where L02 represents the second segmentation loss function, L12 represents the cross-entropy loss function in the second segmentation loss function, λ21 represents the weight of the cross-entropy loss function in the second segmentation loss function, L22 represents the similarity loss function in the second segmentation loss function, and λ22 represents the weight of the similarity loss function in the second segmentation loss function.
- The cross-entropy loss function L12 and the similarity loss function L22 in the second segmentation loss function take analogous forms, where x_m2n2 represents the value of the pixel located in row m2 and column n2 of the second training segmentation image, and y_m2n2 represents the value of the pixel located in row m2 and column n2 of the second reference segmentation image; the exact expressions are likewise not reproduced here.
- minimizing the system loss value also includes minimizing the second segmentation loss function value.
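- A minimal runnable sketch of the two-term segmentation losses and the weighted system loss described above, assuming the standard binary cross-entropy and soft Dice forms given earlier; all weight values and the epsilon are placeholders:

```python
# A hedged sketch of the system loss L = λ01·L01 + λ02·L02, where each
# segmentation loss combines a cross-entropy term and a soft-Dice term.
import torch
import torch.nn.functional as F

def soft_dice_loss(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-6):
    inter = (x * y).sum()
    return 1.0 - 2.0 * inter / (x.pow(2).sum() + y.pow(2).sum() + eps)

def segmentation_loss(x, y, lam_ce=0.5, lam_dice=0.5):
    # x: predicted segmentation in [0, 1]; y: reference segmentation
    return lam_ce * F.binary_cross_entropy(x, y) + lam_dice * soft_dice_loss(x, y)

def system_loss(x1, y1, x2, y2, lam01=0.5, lam02=0.5):
    # x1/y1: first training / reference segmentation images
    # x2/y2: second training / reference segmentation images
    return lam01 * segmentation_loss(x1, y1) + lam02 * segmentation_loss(x2, y2)
```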
- Step S430 Correct the parameters of the neural network to be trained based on the system loss value.
- For example, the training process of the neural network to be trained may also include an optimization function; the optimization function can calculate the error values of the parameters of the neural network to be trained according to the system loss value calculated by the system loss function, and correct the parameters of the neural network to be trained according to the error values.
- the optimization function may use a stochastic gradient descent (SGD) algorithm, a batch gradient descent (BGD) algorithm, etc., to calculate the error value of the parameters of the neural network to be trained.
- The above-mentioned training method may further include: judging whether the training of the neural network to be trained satisfies a predetermined condition; if the predetermined condition is not satisfied, the above training process (i.e., step S410 to step S430) is repeated; if the predetermined condition is satisfied, the above training process is stopped and a trained neural network is obtained.
- the foregoing predetermined condition is that the system loss value corresponding to two consecutive (or more) training input images no longer significantly decreases.
- the foregoing predetermined condition is that the number of training times or training periods of the neural network to be trained reaches a predetermined number. The embodiment of the present disclosure does not limit this.
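- For orientation only, the following sketch shows one way steps S410 to S430 could be iterated with an SGD optimizer until a predetermined number of training periods is reached; `model`, `loader`, the learning rate, and the epoch count are assumptions, and `system_loss` refers to the sketch above:

```python
# A hedged sketch of the training loop: forward pass, system loss, and
# SGD-based parameter correction, stopped after a fixed number of epochs.
import torch

def train(model, loader, epochs: int = 50, lr: float = 1e-3):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(epochs):  # predetermined number of training periods
        for image, ref1, ref2 in loader:
            seg1, seg2 = model(image)                    # step S410
            loss = system_loss(seg1, ref1, seg2, ref2)   # step S420
            optimizer.zero_grad()
            loss.backward()      # error values of the parameters
            optimizer.step()     # step S430: correct the parameters
    return model
```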
- The first training segmentation image and the second training segmentation image output by the trained neural network can be respectively close to the first reference segmentation image and the second reference segmentation image; that is, the trained neural network can perform standard image segmentation on the input image.
- The above training process can be implemented by corresponding software, firmware, hardware, etc.; the above embodiments are only illustrative descriptions of the training process of the neural network to be trained.
- Those skilled in the art should understand that in the training phase, a large number of sample images need to be used to train the neural network; at the same time, the training process for each sample image may include multiple iterations to correct the parameters of the neural network to be trained.
- For example, the training phase may also include fine-tuning the parameters of the neural network to be trained to obtain more optimized parameters.
- the neural network training method provided by the embodiment of the present disclosure can train the neural network used in the image processing method of the embodiment of the present disclosure.
- The neural network trained by the training method can obtain the first segmented image first and then obtain the second segmented image based on the first segmented image, which can improve robustness, achieve higher generalization and accuracy, and yield more stable segmentation results for images acquired under different lighting environments and with different imaging devices; at the same time, the end-to-end convolutional neural network model can reduce manual operations.
- FIG. 8 is a schematic block diagram of an image processing device provided by an embodiment of the present disclosure.
- the image processing apparatus 500 includes a memory 510 and a processor 520.
- the memory 510 is used for non-transitory storage of computer readable instructions
- the processor 520 is used for running the computer readable instructions.
- the image processing method provided by any embodiment of the present disclosure is executed. Or/and neural network training method.
- the memory 510 and the processor 520 may directly or indirectly communicate with each other.
- components such as the memory 510 and the processor 520 may communicate through a network connection.
- the network may include a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
- For example, the network may include a local area network, the Internet, a telecommunication network, the Internet of Things based on the Internet and/or a telecommunication network, and/or any combination of the above networks, etc.
- the wired network may, for example, use twisted pair, coaxial cable, or optical fiber transmission for communication, and the wireless network may use, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or WiFi.
- the present disclosure does not limit the types and functions of the network here.
- the processor 520 may control other components in the image processing apparatus to perform desired functions.
- For example, the processor 520 may be a central processing unit (CPU), a tensor processing unit (TPU), a graphics processing unit (GPU), or another device with data processing capabilities and/or program execution capabilities.
- the central processing unit (CPU) can be an X86 or ARM architecture.
- the GPU can be directly integrated on the motherboard alone or built into the north bridge chip of the motherboard.
- the GPU can also be built into the central processing unit (CPU).
- the memory 510 may include any combination of one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
- Volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
- the non-volatile memory may include, for example, read only memory (ROM), hard disk, erasable programmable read only memory (EPROM), portable compact disk read only memory (CD-ROM), USB memory, flash memory, etc.
- one or more computer instructions may be stored in the memory 510, and the processor 520 may execute the computer instructions to implement various functions.
- the computer-readable storage medium may also store various application programs and various data, such as training input images, first reference segmented images, second reference segmented images, and various data used and/or generated by the application programs.
- For example, when some computer instructions stored in the memory 510 are executed by the processor 520, one or more steps of the image processing method described above may be performed; when other computer instructions are executed by the processor 520, one or more steps of the neural network training method described above may be performed.
- It should be noted that the image processing device provided by the embodiments of the present disclosure is exemplary rather than restrictive; according to actual application requirements, the image processing device may also include other conventional components or structures. For example, to realize the necessary functions of the image processing device, those skilled in the art may provide other conventional components or structures according to specific application scenarios, which are not limited in the embodiments of the present disclosure.
- FIG. 9 is a schematic diagram of a storage medium provided by an embodiment of the disclosure.
- For example, the storage medium 600 non-transitorily stores computer-readable instructions 601.
- For example, when the non-transitory computer-readable instructions 601 are executed by a computer, the instructions of the image processing method provided by any embodiment of the present disclosure or the instructions of the neural network training method provided by any embodiment of the present disclosure can be executed.
- one or more computer instructions may be stored on the storage medium 600.
- Some computer instructions stored on the storage medium 600 may be, for example, instructions for implementing one or more steps in the above-mentioned image processing method.
- the other computer instructions stored on the storage medium may be, for example, instructions for implementing one or more steps in the above-mentioned neural network training method.
- For example, the storage medium may include the storage components of a tablet computer, the hard disk of a personal computer, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), compact disc read-only memory (CD-ROM), flash memory, any combination of the above storage media, or other suitable storage media.
Claims (31)
- An image processing method, comprising: acquiring an input image; and processing the input image using a neural network to obtain a first segmented image and a second segmented image; wherein the neural network comprises two encoding and decoding networks, the two encoding and decoding networks comprising a first encoding and decoding network and a second encoding and decoding network, and the input of the first encoding and decoding network comprises the input image; processing the input image using the neural network to obtain the first segmented image and the second segmented image comprises: performing segmentation processing on the input image using the first encoding and decoding network to obtain a first output feature map and the first segmented image; combining the first output feature map with at least one of the input image and the first segmented image to obtain the input of the second encoding and decoding network; and performing segmentation processing on the input of the second encoding and decoding network using the second encoding and decoding network to obtain the second segmented image.
- The image processing method according to claim 1, wherein each of the two encoding and decoding networks comprises an encoding element network and a decoding element network; the segmentation processing of the first encoding and decoding network comprises: performing encoding processing on the input image using the encoding element network of the first encoding and decoding network to obtain a first encoding feature map; performing decoding processing on the first encoding feature map using the decoding element network of the first encoding and decoding network to obtain the output of the first encoding and decoding network, the output of the first encoding and decoding network comprising the first segmented image; the segmentation processing of the second encoding and decoding network comprises: performing encoding processing on the input of the second encoding and decoding network using the encoding element network of the second encoding and decoding network to obtain a second encoding feature map; performing decoding processing on the second encoding feature map using the decoding element network of the second encoding and decoding network to obtain the output of the second encoding and decoding network, the output of the second encoding and decoding network comprising the second segmented image.
- The image processing method according to claim 2, wherein the encoding element network comprises N coding sub-networks and N-1 down-sampling layers, the N coding sub-networks are connected in sequence, each down-sampling layer is used to connect two adjacent coding sub-networks, and N is an integer with N≥2; the encoding processing of the encoding element network comprises: processing the input of the i-th coding sub-network using the i-th coding sub-network of the N coding sub-networks to obtain the output of the i-th coding sub-network; performing down-sampling processing on the output of the i-th coding sub-network using the down-sampling layer connecting the i-th coding sub-network and the (i+1)-th coding sub-network of the N coding sub-networks to obtain the down-sampled output of the i-th coding sub-network; processing the down-sampled output of the i-th coding sub-network using the (i+1)-th coding sub-network to obtain the output of the (i+1)-th coding sub-network; wherein i is an integer and 1≤i≤N-1, the input of the first coding sub-network of the N coding sub-networks comprises the input of the first encoding and decoding network or the second encoding and decoding network, except for the first coding sub-network, the input of the (i+1)-th coding sub-network comprises the down-sampled output of the i-th coding sub-network, and the first encoding feature map or the second encoding feature map comprises the outputs of the N coding sub-networks.
- The image processing method according to claim 3, wherein, in the case of N>2, the decoding element network comprises N-1 decoding sub-networks and N-1 up-sampling layers, the N-1 decoding sub-networks are connected in sequence, the N-1 up-sampling layers comprise a first up-sampling layer and N-2 second up-sampling layers, the first up-sampling layer is used to connect the first decoding sub-network of the N-1 decoding sub-networks and the N-th coding sub-network of the N coding sub-networks, and each second up-sampling layer is used to connect two adjacent decoding sub-networks; the decoding processing of the decoding element network comprises: acquiring the input of the j-th decoding sub-network of the N-1 decoding sub-networks; processing the input of the j-th decoding sub-network using the j-th decoding sub-network to obtain the output of the j-th decoding sub-network; wherein j is an integer and 1≤j≤N-1, and the output of the first encoding and decoding network or the second encoding and decoding network comprises the output of the (N-1)-th decoding sub-network of the N-1 decoding sub-networks; when j=1, acquiring the input of the j-th decoding sub-network of the N-1 decoding sub-networks comprises: performing up-sampling processing on the output of the N-th coding sub-network using the first up-sampling layer to obtain the up-sampling input of the j-th decoding sub-network; combining the up-sampling input of the j-th decoding sub-network with the output of the (N-j)-th coding sub-network of the N coding sub-networks as the input of the j-th decoding sub-network; when 1<j≤N-1, acquiring the input of the j-th decoding sub-network of the N-1 decoding sub-networks comprises: performing up-sampling processing on the output of the (j-1)-th decoding sub-network using the second up-sampling layer connecting the j-th decoding sub-network and the (j-1)-th decoding sub-network of the N-1 decoding sub-networks to obtain the up-sampling input of the j-th decoding sub-network; combining the up-sampling input of the j-th decoding sub-network with the output of the (N-j)-th coding sub-network of the N coding sub-networks as the input of the j-th decoding sub-network.
- The image processing method according to claim 4, wherein the size of the up-sampling input of the j-th decoding sub-network is the same as the size of the output of the (N-j)-th coding sub-network, where 1≤j≤N-1.
- The image processing method according to claim 3, wherein, in the case of N=2, the encoding element network further comprises a second coding sub-network, and the decoding element network comprises a first decoding sub-network and a first up-sampling layer connecting the first decoding sub-network and the second coding sub-network; the decoding processing of the decoding element network comprises: performing up-sampling processing on the output of the second coding sub-network using the first up-sampling layer connecting the first decoding sub-network and the second coding sub-network to obtain the up-sampling input of the first decoding sub-network; combining the up-sampling input of the first decoding sub-network with the output of the first coding sub-network as the input of the first decoding sub-network, wherein the size of the up-sampling input of the first decoding sub-network is the same as the size of the output of the first coding sub-network; processing the input of the first decoding sub-network using the first decoding sub-network to obtain the output of the first decoding sub-network; wherein the output of the first encoding and decoding network or the second encoding and decoding network comprises the output of the first decoding sub-network.
- The image processing method according to any one of claims 4-6, wherein each of the N coding sub-networks and the N-1 decoding sub-networks comprises a first convolution module and a residual module; the processing of each sub-network comprises: processing the input of the sub-network corresponding to the first convolution module using the first convolution module to obtain a first intermediate output; performing residual processing on the first intermediate output using the residual module to obtain the output of the sub-network.
- The image processing method according to claim 7, wherein the residual module comprises a plurality of second convolution modules; performing residual processing on the first intermediate output using the residual module to obtain the output of the sub-network comprises: processing the first intermediate output using the plurality of second convolution modules to obtain a second intermediate output; and performing residual-connection addition processing on the first intermediate output and the second intermediate output to obtain the output of the sub-network.
- The image processing method according to claim 8, wherein the processing of each of the first convolution module and the plurality of second convolution modules comprises convolution processing, activation processing, and batch normalization processing.
- The image processing method according to any one of claims 4-9, wherein the input and output of each decoding sub-network in the decoding element network have the same size, and the input and output of each coding sub-network in the encoding element network have the same size.
- The image processing method according to any one of claims 2-10, wherein each encoding and decoding network further comprises a fusion module; the fusion module in the first encoding and decoding network is used to process the first output feature map to obtain the first segmented image; performing segmentation processing on the input of the second encoding and decoding network using the second encoding and decoding network to obtain the second segmented image comprises: performing segmentation processing on the input of the second encoding and decoding network using the second encoding and decoding network to obtain a second output feature map; processing the second output feature map using the fusion module in the second encoding and decoding network to obtain the second segmented image.
- The image processing method according to any one of claims 1-11, wherein the first segmented image corresponds to a first region of the input image, the second segmented image corresponds to a second region of the input image, and the first region of the input image surrounds the second region of the input image.
- A neural network training method, comprising: acquiring a training input image; and training a neural network to be trained using the training input image to obtain the neural network in the image processing method according to any one of claims 1-12.
- The training method according to claim 13, wherein training the neural network to be trained using the training input image comprises: processing the training input image using the neural network to be trained to obtain a first training segmentation image and a second training segmentation image; calculating a system loss value of the neural network to be trained through a system loss function based on a first reference segmentation image and a second reference segmentation image of the training input image as well as the first training segmentation image and the second training segmentation image; and correcting parameters of the neural network to be trained based on the system loss value; wherein the first training segmentation image corresponds to the first reference segmentation image, and the second training segmentation image corresponds to the second reference segmentation image.
- The training method according to claim 14, wherein the system loss function comprises a first segmentation loss function and a second segmentation loss function; each of the first segmentation loss function and the second segmentation loss function comprises a cross-entropy loss function and a similarity loss function.
- The training method according to claim 15, wherein the first segmentation loss function is expressed as: L01 = λ11·L11 + λ12·L21, where L01 represents the first segmentation loss function, L11 represents the cross-entropy loss function in the first segmentation loss function, λ11 represents the weight of the cross-entropy loss function in the first segmentation loss function, L21 represents the similarity loss function in the first segmentation loss function, and λ12 represents the weight of the similarity loss function in the first segmentation loss function; the cross-entropy loss function L11 in the first segmentation loss function is expressed as a function of the pixel values (expression not reproduced in the text); the similarity loss function L21 in the first segmentation loss function is expressed as a function of the pixel values (expression not reproduced in the text); where x_m1n1 represents the value of the pixel located in row m1 and column n1 of the first training segmentation image, and y_m1n1 represents the value of the pixel located in row m1 and column n1 of the first reference segmentation image; the second segmentation loss function is expressed as: L02 = λ21·L12 + λ22·L22, where L02 represents the second segmentation loss function, L12 represents the cross-entropy loss function in the second segmentation loss function, λ21 represents the weight of the cross-entropy loss function in the second segmentation loss function, L22 represents the similarity loss function in the second segmentation loss function, and λ22 represents the weight of the similarity loss function in the second segmentation loss function; the cross-entropy loss function L12 in the second segmentation loss function is expressed as a function of the pixel values (expression not reproduced in the text); the similarity loss function L22 in the second segmentation loss function is expressed as a function of the pixel values (expression not reproduced in the text); where x_m2n2 represents the value of the pixel located in row m2 and column n2 of the second training segmentation image, and y_m2n2 represents the value of the pixel located in row m2 and column n2 of the second reference segmentation image.
- The training method according to claim 15 or 16, wherein the system loss function is expressed as: L = λ01·L01 + λ02·L02, where L01 and L02 respectively represent the first segmentation loss function and the second segmentation loss function, and λ01 and λ02 respectively represent the weights of the first segmentation loss function and the second segmentation loss function in the system loss function.
- The training method according to any one of claims 13-17, wherein acquiring the training input image comprises: acquiring an original training input image; and performing preprocessing and data enhancement processing on the original training input image to obtain the training input image.
- An image processing device, comprising: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions, wherein, when run by the processor, the computer-readable instructions execute the image processing method according to any one of claims 1-12 or the training method according to any one of claims 13-18.
- A storage medium, non-transitorily storing computer-readable instructions, wherein, when the non-transitory computer-readable instructions are executed by a computer, instructions of the image processing method according to any one of claims 1-12 or instructions of the training method according to any one of claims 13-18 can be executed.
- A neural network, comprising two encoding and decoding networks and a joint layer, the two encoding and decoding networks comprising a first encoding and decoding network and a second encoding and decoding network; wherein the first encoding and decoding network is configured to perform segmentation processing on an input image to obtain a first output feature map and a first segmented image; the joint layer is configured to combine the first output feature map with at least one of the input image and the first segmented image to obtain the input of the second encoding and decoding network; and the second encoding and decoding network is configured to perform segmentation processing on the input of the second encoding and decoding network to obtain a second segmented image.
- The neural network according to claim 21, wherein each of the two encoding and decoding networks comprises an encoding element network and a decoding element network; the encoding element network of the first encoding and decoding network is configured to perform encoding processing on the input image to obtain a first encoding feature map; the decoding element network of the first encoding and decoding network is configured to perform decoding processing on the first encoding feature map to obtain the output of the first encoding and decoding network, the output of the first encoding and decoding network comprising the first segmented image; the encoding element network of the second encoding and decoding network is configured to perform encoding processing on the input of the second encoding and decoding network to obtain a second encoding feature map; the decoding element network of the second encoding and decoding network is configured to perform decoding processing on the second encoding feature map to obtain the output of the second encoding and decoding network, the output of the second encoding and decoding network comprising the second segmented image.
- The neural network according to claim 22, wherein the encoding element network comprises N coding sub-networks and N-1 down-sampling layers, the N coding sub-networks are connected in sequence, each down-sampling layer is used to connect two adjacent coding sub-networks, and N is an integer with N≥2; the i-th coding sub-network of the N coding sub-networks is configured to process the input of the i-th coding sub-network to obtain the output of the i-th coding sub-network; the down-sampling layer connecting the i-th coding sub-network and the (i+1)-th coding sub-network of the N coding sub-networks is configured to perform down-sampling processing on the output of the i-th coding sub-network to obtain the down-sampled output of the i-th coding sub-network; the (i+1)-th coding sub-network is configured to process the down-sampled output of the i-th coding sub-network to obtain the output of the (i+1)-th coding sub-network; wherein i is an integer and 1≤i≤N-1, the input of the first coding sub-network of the N coding sub-networks comprises the input of the first encoding and decoding network or the second encoding and decoding network, except for the first coding sub-network, the input of the (i+1)-th coding sub-network comprises the down-sampled output of the i-th coding sub-network, and the first encoding feature map or the second encoding feature map comprises the outputs of the N coding sub-networks.
- The neural network according to claim 23, wherein, in the case of N>2, the decoding element network comprises N-1 decoding sub-networks and N-1 up-sampling layers, the N-1 decoding sub-networks are connected in sequence, the N-1 up-sampling layers comprise a first up-sampling layer and N-2 second up-sampling layers, the first up-sampling layer is used to connect the first decoding sub-network of the N-1 decoding sub-networks and the N-th coding sub-network of the N coding sub-networks, and each second up-sampling layer is used to connect two adjacent decoding sub-networks; each encoding and decoding network further comprises N-1 sub-joint layers corresponding to the N-1 decoding sub-networks of the decoding element network; the j-th decoding sub-network of the N-1 decoding sub-networks is configured to process the input of the j-th decoding sub-network to obtain the output of the j-th decoding sub-network, wherein j is an integer and 1≤j≤N-1, and the output of the first encoding and decoding network or the second encoding and decoding network comprises the output of the (N-1)-th decoding sub-network of the N-1 decoding sub-networks; the first up-sampling layer is configured to perform up-sampling processing on the output of the N-th coding sub-network to obtain the up-sampling input of the first decoding sub-network; the second up-sampling layer connecting the j-th decoding sub-network and the (j-1)-th decoding sub-network of the N-1 decoding sub-networks is configured to perform up-sampling processing on the output of the (j-1)-th decoding sub-network to obtain the up-sampling input of the j-th decoding sub-network, wherein j is an integer and 1<j≤N-1; the j-th sub-joint layer of the N-1 sub-joint layers is configured to combine the up-sampling input of the j-th decoding sub-network with the output of the (N-j)-th coding sub-network of the N coding sub-networks as the input of the j-th decoding sub-network, wherein j is an integer and 1≤j≤N-1.
- The neural network according to claim 24, wherein the size of the up-sampling input of the j-th decoding sub-network is the same as the size of the output of the (N-j)-th coding sub-network, where 1≤j≤N-1.
- The neural network according to claim 23, wherein, in the case of N=2, the encoding element network further comprises a second coding sub-network, the decoding element network comprises a first decoding sub-network and a first up-sampling layer connecting the first decoding sub-network and the second coding sub-network, and each encoding and decoding network further comprises a first sub-joint layer corresponding to the first decoding sub-network of the decoding element network; the first up-sampling layer connecting the first decoding sub-network and the second coding sub-network is configured to perform up-sampling processing on the output of the second coding sub-network to obtain the up-sampling input of the first decoding sub-network; the first sub-joint layer is configured to combine the up-sampling input of the first decoding sub-network with the output of the first coding sub-network as the input of the first decoding sub-network, wherein the size of the up-sampling input of the first decoding sub-network is the same as the size of the output of the first coding sub-network; the first decoding sub-network is configured to process the input of the first decoding sub-network to obtain the output of the first decoding sub-network; wherein the output of the first encoding and decoding network or the second encoding and decoding network comprises the output of the first decoding sub-network.
- The neural network according to any one of claims 24-26, wherein each of the N coding sub-networks and the N-1 decoding sub-networks comprises a first convolution module and a residual module; the first convolution module is configured to process the input of the sub-network corresponding to the first convolution module to obtain a first intermediate output; the residual module is configured to perform residual processing on the first intermediate output to obtain the output of the sub-network.
- The neural network according to claim 27, wherein the residual module comprises a plurality of second convolution modules and a residual addition layer; the plurality of second convolution modules are configured to process the first intermediate output to obtain a second intermediate output; the residual addition layer is configured to perform residual-connection addition processing on the first intermediate output and the second intermediate output to obtain the output of the sub-network.
- The neural network according to claim 28, wherein each of the first convolution module and the plurality of second convolution modules comprises a convolution layer, an activation layer, and a batch normalization layer; the convolution layer is configured to perform convolution processing, the activation layer is configured to perform activation processing, and the batch normalization layer is configured to perform batch normalization processing.
- The neural network according to any one of claims 24-29, wherein the input and output of each decoding sub-network in the decoding element network have the same size, and the input and output of each coding sub-network in the encoding element network have the same size.
- The neural network according to any one of claims 22-30, wherein each encoding and decoding network further comprises a fusion module; the fusion module in the first encoding and decoding network is configured to process the first output feature map to obtain the first segmented image; the second encoding and decoding network being configured to perform segmentation processing on the input of the second encoding and decoding network to obtain the second segmented image comprises: the second encoding and decoding network is configured to perform segmentation processing on the input of the second encoding and decoding network to obtain a second output feature map; the fusion module in the second encoding and decoding network is configured to process the second output feature map to obtain the second segmented image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/098928 WO2021017006A1 (zh) | 2019-08-01 | 2019-08-01 | 图像处理方法及装置、神经网络及训练方法、存储介质 |
CN201980001232.XA CN112602114A (zh) | 2019-08-01 | 2019-08-01 | 图像处理方法及装置、神经网络及训练方法、存储介质 |
US16/970,131 US11816870B2 (en) | 2019-08-01 | 2019-08-01 | Image processing method and device, neural network and training method thereof, storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/098928 WO2021017006A1 (zh) | 2019-08-01 | 2019-08-01 | 图像处理方法及装置、神经网络及训练方法、存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021017006A1 true WO2021017006A1 (zh) | 2021-02-04 |
Family
ID=74228505
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/098928 WO2021017006A1 (zh) | 2019-08-01 | 2019-08-01 | 图像处理方法及装置、神经网络及训练方法、存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11816870B2 (zh) |
CN (1) | CN112602114A (zh) |
WO (1) | WO2021017006A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113256670A (zh) * | 2021-05-24 | 2021-08-13 | 推想医疗科技股份有限公司 | 图像处理方法及装置、网络模型的训练方法及装置 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112785575B (zh) * | 2021-01-25 | 2022-11-18 | 清华大学 | 一种图像处理的方法、装置和存储介质 |
CN113658165B (zh) * | 2021-08-25 | 2023-06-20 | 平安科技(深圳)有限公司 | 杯盘比确定方法、装置、设备及存储介质 |
TWI784688B (zh) * | 2021-08-26 | 2022-11-21 | 宏碁股份有限公司 | 眼睛狀態評估方法及電子裝置 |
CN114708973B (zh) * | 2022-06-06 | 2022-09-13 | 首都医科大学附属北京友谊医院 | 一种用于对人体健康进行评估的设备和存储介质 |
CN116612146B (zh) * | 2023-07-11 | 2023-11-17 | 淘宝(中国)软件有限公司 | 图像处理方法、装置、电子设备以及计算机存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240235A1 (en) * | 2017-02-23 | 2018-08-23 | Zebra Medical Vision Ltd. | Convolutional neural network for segmentation of medical anatomical images |
CN109598728A (zh) * | 2018-11-30 | 2019-04-09 | 腾讯科技(深圳)有限公司 | 图像分割方法、装置、诊断***及存储介质 |
US20190114774A1 (en) * | 2017-10-16 | 2019-04-18 | Adobe Systems Incorporated | Generating Image Segmentation Data Using a Multi-Branch Neural Network |
CN109859210A (zh) * | 2018-12-25 | 2019-06-07 | 上海联影智能医疗科技有限公司 | 一种医学数据处理装置及方法 |
CN109993726A (zh) * | 2019-02-21 | 2019-07-09 | 上海联影智能医疗科技有限公司 | 医学图像的检测方法、装置、设备和存储介质 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
IE87469B1 (en) * | 2016-10-06 | 2024-01-03 | Google Llc | Image processing neural networks with separable convolutional layers |
CN109493347B (zh) * | 2017-09-12 | 2021-03-23 | 深圳科亚医疗科技有限公司 | 在图像中对稀疏分布的对象进行分割的方法和*** |
JP7250489B2 (ja) * | 2018-11-26 | 2023-04-03 | キヤノン株式会社 | 画像処理装置およびその制御方法、プログラム |
CN110009598B (zh) * | 2018-11-26 | 2023-09-05 | 腾讯科技(深圳)有限公司 | 用于图像分割的方法和图像分割设备 |
US11328430B2 (en) * | 2019-05-28 | 2022-05-10 | Arizona Board Of Regents On Behalf Of Arizona State University | Methods, systems, and media for segmenting images |
US11672503B2 (en) * | 2021-08-20 | 2023-06-13 | Sonic Incytes Medical Corp. | Systems and methods for detecting tissue and shear waves within the tissue |
- 2019
- 2019-08-01: US US16/970,131 (US11816870B2, Active)
- 2019-08-01: CN CN201980001232.XA (CN112602114A, Pending)
- 2019-08-01: WO PCT/CN2019/098928 (WO2021017006A1, Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240235A1 (en) * | 2017-02-23 | 2018-08-23 | Zebra Medical Vision Ltd. | Convolutional neural network for segmentation of medical anatomical images |
US20190114774A1 (en) * | 2017-10-16 | 2019-04-18 | Adobe Systems Incorporated | Generating Image Segmentation Data Using a Multi-Branch Neural Network |
CN109598728A (zh) * | 2018-11-30 | 2019-04-09 | 腾讯科技(深圳)有限公司 | 图像分割方法、装置、诊断***及存储介质 |
CN109859210A (zh) * | 2018-12-25 | 2019-06-07 | 上海联影智能医疗科技有限公司 | 一种医学数据处理装置及方法 |
CN109993726A (zh) * | 2019-02-21 | 2019-07-09 | 上海联影智能医疗科技有限公司 | 医学图像的检测方法、装置、设备和存储介质 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113256670A (zh) * | 2021-05-24 | 2021-08-13 | 推想医疗科技股份有限公司 | 图像处理方法及装置、网络模型的训练方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
US20220398783A1 (en) | 2022-12-15 |
US11816870B2 (en) | 2023-11-14 |
CN112602114A (zh) | 2021-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021017006A1 (zh) | 图像处理方法及装置、神经网络及训练方法、存储介质 | |
WO2021164429A1 (zh) | 图像处理方法、图像处理装置及设备 | |
CN109754402B (zh) | 图像处理方法、图像处理装置以及存储介质 | |
US10706333B2 (en) | Medical image analysis method, medical image analysis system and storage medium | |
WO2020177651A1 (zh) | 图像分割方法和图像处理装置 | |
US11488021B2 (en) | Systems and methods for image segmentation | |
CN113706526B (zh) | 内窥镜图像特征学习模型、分类模型的训练方法和装置 | |
CN111784671B (zh) | 基于多尺度深度学习的病理图像病灶区域检测方法 | |
CN110163260B (zh) | 基于残差网络的图像识别方法、装置、设备及存储介质 | |
KR102058884B1 (ko) | 치매를 진단을 하기 위해 홍채 영상을 인공지능으로 분석하는 방법 | |
EP3923233A1 (en) | Image denoising method and apparatus | |
WO2020108562A1 (zh) | 一种ct图像内的肿瘤自动分割方法及*** | |
JP2021513697A (ja) | 完全畳み込みニューラル・ネットワークを用いた心臓ctaにおける解剖学的構造のセグメンテーションのためのシステム | |
WO2023070447A1 (zh) | 模型训练方法、图像处理方法、计算处理设备及非瞬态计算机可读介质 | |
CN112396605B (zh) | 网络训练方法及装置、图像识别方法和电子设备 | |
WO2021168920A1 (zh) | 基于多剂量等级的低剂量图像增强方法、***、计算机设备及存储介质 | |
WO2024011835A1 (zh) | 一种图像处理方法、装置、设备及可读存储介质 | |
CN111951281A (zh) | 图像分割方法、装置、设备及存储介质 | |
CN110570394A (zh) | 医学图像分割方法、装置、设备及存储介质 | |
CN115471470A (zh) | 一种食管癌ct图像分割方法 | |
WO2020187029A1 (zh) | 图像处理方法及装置、神经网络的训练方法、存储介质 | |
CN112419283A (zh) | 估计厚度的神经网络及其方法 | |
US20220392059A1 (en) | Method and system for representation learning with sparse convolution | |
WO2024016691A1 (zh) | 一种图像检索方法、模型训练方法、装置及存储介质 | |
WO2022183325A1 (zh) | 视频块处理方法及装置、神经网络的训练方法和存储介质 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19939210; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19939210; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/02/2023)