CN112381834A - Labeling method for image interactive instance segmentation

Labeling method for image interactive instance segmentation

Info

Publication number
CN112381834A
CN112381834A (application number CN202011145197.XA)
Authority
CN
China
Prior art keywords
region
subdivided
target instance
image
regions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011145197.XA
Other languages
Chinese (zh)
Other versions
CN112381834B (en)
Inventor
李融 (Li Rong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202011145197.XA
Publication of CN112381834A
Application granted
Publication of CN112381834B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an annotation method for image interactive instance segmentation, which comprises: S1, constructing a to-be-processed set E and annotated sets C_i and initializing them; S2, segmenting the set E with an interactive image segmentation algorithm to form subdivided regions R_k, after which E is emptied; S3, placing each subdivided region R_k into either the set E or a set C_i, each subdivided region being placed into only one set: if a subdivided region R_k is completely contained in the i-th target instance region, R_k is put into the set C_i; if a subdivided region R_k has a non-empty intersection with any target instance region and that intersection is not equal to R_k, R_k is put into the set E; S4, while the set E is not empty, repeating steps S2 and S3 until E is empty; S5, merging all subdivided regions in each set C_i and labeling every pixel of the merged region as the i-th target instance.

Description

Labeling method for image interactive instance segmentation
Technical Field
The invention relates to the technical field of computer image processing, in particular to an annotation method for image interactive instance segmentation.
Background
Image instance segmentation is an important research topic in the computer vision branch of artificial intelligence. It provides pixel-level instance discrimination for image frames, enabling understanding of image content. At present, deep-learning-based algorithms require large quantities of pixel-level image labels to train instance/semantic segmentation models, and such labels are in wide demand in many fields, including autonomous driving and intelligent matting. The traditional labeling method for image instance segmentation mainly consists of manually delineating the pixel-level outline of each specific individual in the image, finally forming a full-image pixel-level annotation. This requires long, high-precision work from annotators; when the image content is complex and the data volume is large, a great deal of manual labor time is consumed.
Disclosure of Invention
In order to solve the defects of the prior art and achieve the purpose of reducing the labor cost of outline delineation, the invention adopts the following technical scheme:
an annotation method for image interactive instance segmentation comprises the following steps:
S1, let the input image be I, with N target instances to be labeled on the image; construct a set E and sets C_i, i = 1, ..., N, where each element of E represents an image region of I still to be processed and each element of C_i represents an already-labeled image region belonging to the i-th target instance region; the initial set E has only one element, the whole image region of I, and each initial set C_i is empty;
S2, segment all the image regions to be processed in the set E with an interactive image segmentation algorithm, forming K subdivided regions R_k, k = 1, 2, ..., K, after which E is emptied;
S3, place each subdivided region R_k into the set E or a set C_i, subject to the following conditions: each subdivided region is placed into only one set; if the subdivided region R_k is completely contained in the i-th target instance region, put R_k into the set C_i; if the subdivided region R_k has a non-empty intersection with any target instance region and that intersection is not equal to R_k, put R_k into the set E;
s4, when the set E is not empty, repeating the steps S2 and S3 until the set E is an empty set;
S5, for each i ∈ {1, ..., N}, merge all the subdivided regions in the set C_i; the merged region represents the region of the i-th target instance, and each pixel in the merged region is labeled as the i-th target instance.
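The S1–S5 loop can be sketched in code. This is a minimal illustration, not the patent's implementation: regions are modeled as frozensets of pixel ids, `instances` maps each target index i to its ground-truth pixel set (standing in for the annotator's visual judgement), and `subdivide` is a hypothetical stand-in for the interactive segmentation algorithm of S2 that simply halves a region. All names here are illustrative.

```python
def subdivide(region):
    """Placeholder for the interactive image-segmentation step S2."""
    pixels = sorted(region)
    mid = len(pixels) // 2
    if mid == 0:
        return [region]                      # single pixel: cannot split further
    return [frozenset(pixels[:mid]), frozenset(pixels[mid:])]

def annotate(image_pixels, instances):
    E = [frozenset(image_pixels)]            # S1: one element, the whole image
    C = {i: [] for i in instances}           # S1: annotated sets start empty
    while E:                                 # S4: loop until E is empty
        regions = [r for area in E for r in subdivide(area)]   # S2
        E = []
        for R in regions:                    # S3: each region joins one set only
            for i, inst in instances.items():
                inter = R & inst
                if inter == R:               # fully inside instance i -> C_i
                    C[i].append(R)
                    break
                if inter:                    # straddles a boundary -> back to E
                    E.append(R)
                    break
    # S5: merge each C_i into the final per-instance label region
    return {i: frozenset().union(*C[i]) if C[i] else frozenset() for i in C}
```

For example, `annotate(range(10), {1: frozenset({0, 1, 2}), 2: frozenset({5, 6, 7})})` recovers the two instance pixel sets; in real use the `subdivide` stub would be a superpixel or interactive segmentation algorithm, and the containment tests would come from annotator input rather than ground truth.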
In step S2, when the area of a region to be processed is greater than or equal to a preset threshold, the region is subdivided with an automatic image segmentation algorithm; the algorithm has a set of subdivision parameters, and the number of output subdivided regions is controlled by setting these parameters. When the area of a region to be processed is smaller than the preset threshold, the boundary of the target instance within the region is identified by interactive operation, so as to segment out the subdivided region belonging to the target instance.
The subdivision parameters are adjusted until the local boundary error between the boundaries of the subdivided regions and the target instance is less than a pre-specified error threshold.
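A minimal sketch of this parameter-adjustment loop, under the assumption that the caller supplies `subdivide` and `local_boundary_error` callables (both hypothetical names, not from the patent): the subdivision parameter is refined until the region boundaries track the target-instance contour closely enough.

```python
def tune_subdivision(region, subdivide, local_boundary_error,
                     err_threshold, param=1, max_param=64):
    """Return the first subdivision parameter whose segmentation has a
    local boundary error below err_threshold (or the finest allowed)."""
    while param < max_param:
        if local_boundary_error(subdivide(region, param)) < err_threshold:
            break
        param *= 2   # refine: more, smaller subdivided regions
    return param
```

In practice the error check corresponds to the annotator visually judging how well subdivided-region edges match the instance contour, rather than a computed metric.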
In step S3, placing the subdivided regions R_1, R_2, ..., R_K into the sets C_1, C_2, ..., C_N comprises the following steps:
s311, selecting the ith target instance;
S312, put the subdivided regions located inside the i-th target instance into the set C_i: interactively draw a group of trajectories within the i-th target instance region, and put every subdivided region that intersects a trajectory and falls completely within the target instance region into the set C_i;
S313, if all the subdivided regions located inside the i-th target instance region have been put into the set C_i, end; otherwise, repeat step S312.
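The trajectory test of S312 can be sketched as a set operation. This assumes subdivided regions, the drawn trajectory, and the target-instance mask are all sets of pixel coordinates; the function and variable names are illustrative, not from the patent.

```python
def select_by_trajectory(regions, trajectory, instance_mask):
    """Subdivided regions that intersect the drawn trajectory and lie
    completely inside the target instance region (candidates for C_i)."""
    return [R for R in regions
            if R & trajectory          # the trajectory touches the region
            and R <= instance_mask]    # and the region is fully inside the instance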
In step S312, a pixel is selected within the target instance region and a growth threshold is set; region growing is performed from that pixel according to a region-growing algorithm. The generated region covers a plurality of subdivided regions, which are taken as candidate subdivided regions. Candidate subdivided regions lying completely within the i-th target instance region are put into the set C_i. If a candidate subdivided region extends beyond, or does not lie completely inside, the i-th target instance, it is marked interactively, and only the unmarked candidate subdivided regions are put into the set C_i.
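A minimal sketch of the region-growing step, assuming a 4-connected grid where `image` maps (row, col) coordinates to intensities and the growth threshold bounds the intensity difference from the seed pixel; the patent does not fix these details, so they are assumptions of this example.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from seed: 4-connected pixels whose intensity differs
    from the seed pixel's intensity by at most threshold."""
    base = image[seed]
    grown, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nb in image and nb not in grown and abs(image[nb] - base) <= threshold:
                grown.add(nb)
                frontier.append(nb)
    return grown
```

The subdivided regions overlapped by the grown set would then be the candidate subdivided regions of S312; those fully inside the instance join C_i, while the rest are filtered out interactively.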
In step S3, placing the subdivided regions R_1, R_2, ..., R_K into the set E comprises the following steps:
s321, selecting the ith target instance;
S322, draw a group of trajectories passing through the subdivided regions where the contour of the i-th target instance region lies, and put the subdivided regions that intersect the trajectories into the set E;
S323, when every subdivided region that intersects the i-th target instance is in the set E and its intersection with the instance is not equal to the subdivided region itself, end; otherwise, repeat step S322.
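The boundary-region selection of S322 mirrors the earlier trajectory test, with the containment condition inverted. Under the same set-of-pixel-coordinates assumptions as before (illustrative names, not from the patent), regions crossed by the trajectory that straddle the instance contour are returned to E for further subdivision.

```python
def select_boundary_regions(regions, trajectory, instance_mask):
    """Subdivided regions crossed by the trajectory that straddle the
    instance contour: non-empty intersection with the instance mask,
    but not fully contained in it (these go back into the set E)."""
    return [R for R in regions
            if R & trajectory               # the trajectory touches the region
            and R & instance_mask           # it overlaps the instance
            and not R <= instance_mask]     # but is not fully inside it
```

Each region selected here contains pixels both inside and outside the instance contour, so subdividing it further in the next S2 pass refines the boundary.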
The invention has the advantages and beneficial effects that:
the method has the advantages that through an interactive image segmentation algorithm, set classification is combined, circulation operation is carried out, and required instance segmentation labeling is completed in an auxiliary mode by utilizing the computing power of a computer, so that the labor cost of traditional labeling is reduced, the degree of uncertainty is reduced, the labeling precision is improved, and the working efficiency is improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is an exemplary diagram of forming candidate inner subdivided regions in the present invention.
FIG. 3 is an exemplary diagram of finding the interior regions of several example objects from a drawn trajectory in accordance with the present invention.
FIG. 4 is an exemplary diagram of a subdivided region to be segmented in which the outlines of a plurality of example objects are obtained according to a drawn track.
FIG. 5 is an exemplary diagram of an internal segment map of several target objects obtained after adjusting parameters in the present invention.
FIG. 6 is an exemplary diagram of the results of an example segmentation annotation in accordance with the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As shown in fig. 1, an annotation method for interactive instance segmentation of an image includes the following steps:
step one, an input image is set as I, as shown in figure 2, 2 target sheep examples to be labeled are arranged on the image, a set E and a set C are constructedi(where I =1, 2), where each element in the set E represents an image area to be processed on the image I, the set C1Including the subdivided regions of the left-hand annotated image, set C2Including the image subdivision areas in the right sheep where labeling has been completed. The initial set E has only one element and is the whole image area of the image I, and the initial set C1,C2Is an empty set;
Step 2: segment all the image regions to be processed in the set E with an interactive image segmentation algorithm, forming K subdivided regions R_1, R_2, ..., R_K, after which E is emptied. As shown in Fig. 2, an automatic image segmentation algorithm subdivides the region to be processed; the subdivision parameters are set manually, controlling the number of output subdivided regions, while observing how well the subdivided-region edges match the local contours of the two sheep. When the edges of the subdivided regions produced by the algorithm are observed to match the local contours of the sheep, subdivision stops; K subdivided regions have then been formed, indicated schematically by dividing lines in the figure.
Step 3: interactively place the subdivided regions R_1, R_2, ..., R_K into the corresponding sets C_i. As shown in Fig. 2, select the left sheep, pick a pixel inside it, set a growth threshold, and perform region growing from that pixel according to a region-growing algorithm; the generated region covers several subdivided regions (the highlighted parts of Fig. 2), which are taken as candidate subdivided regions. Observing that these candidates lie completely inside the left sheep region, put them into the set C_1. Then, as shown in Fig. 3, interactively draw a trajectory within the left sheep region, select the subdivided regions that intersect the trajectory and fall completely within the left sheep, and put them into C_1 as well. Repeat these two operations until every subdivided region inside the left sheep has been placed into C_1. Then select the right sheep and perform the same operations until its inner subdivided regions are all in C_2.
Next, interactively put the remaining subdivided regions into the set E. Specifically, as shown in Fig. 4, first select the left sheep and draw a trajectory crossing part of its contour; the subdivided regions intersecting the trajectory contain pixels both inside and outside the sheep contour. Add these regions to E. Repeat similar operations until every subdivided region that partially intersects either sheep is in E; such a region must satisfy the condition that its intersection with the sheep is non-empty and not equal to the region itself.
Step 4: while E is not empty, repeat steps 2 and 3 until E is empty. Specifically, as shown in Fig. 5, the region around the left leg is further subdivided locally, and by drawing trajectories, detailed interior regions of the sheep are precisely selected and placed into C_1. Similar operations continue until E is empty.
Step 5: for each i ∈ {1, ..., N}, merge all the regions in the set C_i; the merged region is the region of the i-th target instance, and each pixel in that region is labeled as the i-th target instance. Specifically, as shown in Fig. 6, merging the regions in C_1 yields the pixel region used as the instance segmentation label of the left sheep, and merging the regions in C_2 yields that of the right sheep.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. An annotation method for interactive instance segmentation of images, characterized by comprising the steps of:
S1, letting the input image be I, with N target instances to be labeled on the image, and constructing a set E and sets C_i, i = 1, ..., N, each element of E representing an image region of I to be processed and each element of C_i representing an already-labeled image region belonging to the i-th target instance region, the initial set E having only one element, namely the whole image region of I, and each initial set C_i being empty;
S2, segmenting all the image regions to be processed in the set E by an interactive image segmentation algorithm to form K subdivided regions R_k, k = 1, 2, ..., K, after which E is emptied;
S3, placing each subdivided region R_k into the set E or a set C_i, subject to the following conditions: each subdivided region is placed into only one set; if the subdivided region R_k is completely contained in the i-th target instance region, R_k is put into the set C_i; if the subdivided region R_k has a non-empty intersection with any target instance region and that intersection is not equal to R_k, R_k is put into the set E;
s4, when the set E is not empty, repeating the steps S2 and S3 until the set E is an empty set;
S5, for each i ∈ {1, ..., N}, merging all the subdivided regions in the set C_i, the merged region representing the region of the i-th target instance, and labeling each pixel in the merged region as the i-th target instance.
2. The method as claimed in claim 1, wherein in step S2, when the area of a region to be processed is greater than or equal to a preset threshold, the region is subdivided with an automatic image segmentation algorithm having a set of subdivision parameters, the number of output subdivided regions being controlled by setting the subdivision parameters; and when the area of a region to be processed is smaller than the preset threshold, the boundary of the target instance in the region is identified by interactive operation, so as to segment out the subdivided region belonging to the target instance.
3. The method of claim 2, wherein the subdivision parameters are adjusted until the local boundary error between the boundaries of the subdivided regions and the target instance is less than a pre-specified error threshold.
4. The method for labeling interactive instance segmentation of images as claimed in claim 1, wherein in step S3, placing the subdivided regions R_1, R_2, ..., R_K into the sets C_1, C_2, ..., C_N comprises the following steps:
s311, selecting the ith target instance;
S312, putting the subdivided regions located inside the i-th target instance into the set C_i: interactively drawing a group of trajectories within the i-th target instance region, and putting every subdivided region that intersects a trajectory and falls completely within the target instance region into the set C_i;
S313, if all the subdivided regions located inside the i-th target instance region have been put into the set C_i, ending; otherwise, repeating step S312.
5. The method as claimed in claim 4, wherein in step S312, a pixel is selected in the target instance region, a growth threshold is set, and region growing is performed from the pixel according to a region-growing algorithm; the generated region comprises a plurality of subdivided regions, which are set as candidate subdivided regions, and the candidate subdivided regions are put into the set C_i if they are completely inside the i-th target instance region.
6. The method for labeling interactive instance segmentation of images as claimed in claim 1, wherein in step S3, placing the subdivided regions R_1, R_2, ..., R_K into the set E comprises the following steps:
s321, selecting the ith target instance;
S322, drawing a group of trajectories passing through the subdivided regions where the contour of the i-th target instance region lies, and putting the subdivided regions that intersect the trajectories into the set E;
S323, when every subdivided region that intersects the i-th target instance is in the set E and its intersection with the instance is not equal to the subdivided region itself, ending; otherwise, repeating step S322.
CN202011145197.XA 2021-01-08 2021-01-08 Labeling method for image interactive instance segmentation Active CN112381834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011145197.XA CN112381834B (en) 2021-01-08 2021-01-08 Labeling method for image interactive instance segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011145197.XA CN112381834B (en) 2021-01-08 2021-01-08 Labeling method for image interactive instance segmentation

Publications (2)

Publication Number Publication Date
CN112381834A 2021-02-19
CN112381834B CN112381834B (en) 2022-06-03

Family

ID=74581783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011145197.XA Active CN112381834B (en) 2021-01-08 2021-01-08 Labeling method for image interactive instance segmentation

Country Status (1)

Country Link
CN (1) CN112381834B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949316A (en) * 2019-03-01 2019-06-28 东南大学 A kind of Weakly supervised example dividing method of grid equipment image based on RGB-T fusion
CN110084821A (en) * 2019-04-17 2019-08-02 杭州晓图科技有限公司 A kind of more example interactive image segmentation methods
CN110097564A (en) * 2019-04-04 2019-08-06 平安科技(深圳)有限公司 Image labeling method, device, computer equipment and storage medium based on multi-model fusion
CN112163634A (en) * 2020-10-14 2021-01-01 平安科技(深圳)有限公司 Example segmentation model sample screening method and device, computer equipment and medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ADITYA ARUN et al.: "Weakly Supervised Instance Segmentation by Learning Annotation Consistent Instances", arXiv:2007.09397v1 *
CAO Xiaopeng et al.: "GPU-accelerated interactive region segmentation of medical CT images", Journal of Image and Graphics *

Also Published As

Publication number Publication date
CN112381834B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN109035255B (en) Method for segmenting aorta with interlayer in CT image based on convolutional neural network
CN104537676B (en) Gradual image segmentation method based on online learning
CN109741347A (en) A kind of image partition method of the iterative learning based on convolutional neural networks
CN102651128B (en) Image set partitioning method based on sampling
CN111259936B (en) Image semantic segmentation method and system based on single pixel annotation
CN109087303A (en) The frame of semantic segmentation modelling effect is promoted based on transfer learning
WO2020029915A1 (en) Artificial intelligence-based device and method for tongue image splitting in traditional chinese medicine, and storage medium
CN109559328B (en) Bayesian estimation and level set-based rapid image segmentation method and device
Sadanandan et al. Segmentation and track-analysis in time-lapse imaging of bacteria
CN105279768B (en) Variable density tracking cell method based on multi-mode Ant ColonySystem
CN111598925A (en) Visual target tracking method and device based on ECO algorithm and region growth segmentation
CN117670895B (en) Immunohistochemical pathological image cell segmentation method based on section re-staining technology
CN108986109A (en) A kind of serializing viewing human sectioning image automatic division method
CN114862800A (en) Semi-supervised medical image segmentation method based on geometric consistency constraint
CN112381834B (en) Labeling method for image interactive instance segmentation
CN112446417B (en) Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation
CN108898601A (en) Femoral head image segmentation device and dividing method based on random forest
CN102509296A (en) Maximum-likelihood-region-merging-based interactive segmentation method for stomach computed tomography (CT) image
CN109993773B (en) Multi-target tracking method and device for series section images
CN111897333A (en) Robot walking path planning method
Zhou et al. Automatic segmentation of lung nodules using improved U-Net network
CN117292217A (en) Skin typing data augmentation method and system based on countermeasure generation network
CN112862789B (en) Interactive image segmentation method based on machine learning
CN108961300B (en) Image segmentation method and device
CN111428734B (en) Image feature extraction method and device based on residual countermeasure inference learning and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant