CN110288617B - Automatic human body slice image segmentation method based on shared matting and ROI gradual change - Google Patents
- Publication number
- CN110288617B (application CN201910600636.2A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- area
- foreground
- point
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an automatic human body slice image segmentation method based on shared matting and ROI gradual change, which comprises the following steps: S1: roughly scribbling the region of interest; S2: selecting an internal area A of the region of interest; S3: covering the region of interest and reading its boundary region; S4: defining the foreground of the internal area A as the white part, the middle band as the gray part and the background as the black part, obtaining a trimap of the region of interest; S5: obtaining a black-and-white mask of the region of interest from the trimap by a shared matting algorithm, and obtaining the foreground skeleton of the mask by a thinning algorithm; S6: taking the foreground skeleton as the foreground scribble of the next picture and obtaining the trimap of the next picture by a flood-fill algorithm; S7: repeating S5 to S6 until all pictures are processed, yielding black-and-white masks of the regions of interest of all pictures.
Description
Technical Field
The invention relates to the technical field of image segmentation, and in particular to an automatic human body slice image segmentation method based on shared matting and ROI gradual change.
Background
Image matting, which aims to extract the foreground elements of an image by estimating their color and opacity, is one of the main means of medical image segmentation. Although much progress has been made in recent years in improving the accuracy of matting techniques, some common problems remain. For example, the prior art extracts images with alpha matting, but a trimap must be supplied for every image. In plain alpha matting, each unknown pixel searches the entire circular area centered on it; the shared matting algorithm instead divides the circle into several sectors, and adjacent pixels search non-overlapping sectors, which improves processing efficiency. However, when this method is used to segment a sequence of images, errors are generated and accumulate during segmentation, so the segmentation result is not very accurate.
Disclosure of Invention
In view of the problems in the prior art, the invention discloses an automatic human body slice image segmentation method based on shared matting and ROI gradual change, which specifically comprises the following steps:
S1: reading the region of interest of the image and roughly scribbling the region of interest;
S2: selecting an internal area A of the region of interest, calculating the color-difference range between the internal area A and the scribble pixels, and recording the maximum negative color difference loDiff1 and the maximum positive color difference upDiff1 between the scribble pixels and the pixels of area A;
S3: covering the region of interest and reading its boundary region, defining the union of the internal area A and the boundary region as area B, searching the color-difference range between area B and the scribble pixels, and recording the maximum negative color difference loDiff2 and the maximum positive color difference upDiff2 between the scribble pixels and the pixels of area B;
S4: defining the internal area A as the white part, the part of area B beyond area A as the gray part, and the area not contained in area B as the black part, to obtain a trimap of the whole picture;
S5: obtaining a black-and-white mask of the region of interest from the trimap by a shared matting algorithm, and obtaining the foreground skeleton of the mask by a parallel thinning algorithm;
S6: taking the foreground skeleton as the foreground scribble of the next picture, and obtaining the trimap of the next picture by a flood-fill algorithm using the loDiff1, upDiff1, loDiff2 and upDiff2 values obtained in S2 and S3;
S7: repeating S5 to S6 until all pictures are processed, yielding black-and-white masks of the regions of interest of all pictures.
In step S5, the black-and-white mask of the region of interest is obtained from the trimap by the shared matting algorithm in the following specific manner:
S51: setting a maximum-count parameter K for foreground and background samples, casting K paths from each unknown pixel with an angle of pi/K between paths, and expanding the unknown-pixel region to a KxK rectangular region, wherein the initial angle of the paths changes periodically from pixel to pixel;
S52: for each path, recording the image-space and color-space distances between the unknown point and the first foreground or background pixel encountered, and stopping the search for foreground and background pixels when the path passes beyond the edge of the image;
S53: converting the candidate points into color samples; calculating the color difference between each sample point and the unknown point; counting the pixel mutations on the straight-line path from the unknown point to the sampled foreground and background points; calculating the physical distance between the sampled foreground and background points and the unknown point; and minimizing an objective over foreground-background combinations to find the optimal pair of sampled pixels;
S54: processing the optimally combined sample pixels to obtain the minimizing foreground-background data pair, which is defined as the best sample point;
S55: setting a radius threshold for the unknown pixels in the gray area of the trimap; for each unknown pixel, finding within that radius the three smallest MP values, obtaining the color data of the three unknown pixels corresponding to those values, weighting and averaging those data to obtain data T, and defining T as the best sample point;
S56: obtaining new foreground-pixel, background-pixel, transparency and confidence information from the best sample points, yielding the black-and-white mask of the region of interest.
In step S6, the trimap of the next picture is obtained by the flood-fill algorithm in the following specific manner:
S61: reading the position and color value of every pixel of the foreground skeleton, and setting the loDiff1, upDiff1, loDiff2 and upDiff2 values;
S62: selecting a pixel Q in the foreground skeleton and searching its four-neighborhood pixels: if the negative color difference between a neighbor and the current pixel Q is less than loDiff1 and the positive color difference is less than upDiff1, the neighbor is added to the selected set; iterating until no pixel satisfies the condition, and marking the resulting area as D1;
S63: selecting a pixel W in the foreground skeleton and searching its four-neighborhood pixels: if the negative color difference between a neighbor and the current pixel W is less than loDiff2 and the positive color difference is less than upDiff2, the neighbor is added to the selected set; iterating until no pixel satisfies the condition, and marking the resulting area as D2;
S64: area D1 is defined as the white part of the trimap, the part of area D2 beyond area D1 as the gray part, and the remainder as the black part.
Due to the adoption of the above technical scheme, the automatic human body slice image segmentation method based on shared matting and ROI gradual change provided by the invention uses the skeleton to determine the foreground of the next image; compared with a seed-point method, fewer parts of the image are missed and the segmentation is finer. Obtaining the black-and-white mask of the region of interest with a shared matting algorithm reduces the amount of computation, lowers resource consumption and increases segmentation speed. The method raises the degree of automation of image segmentation, requires little manual intervention, is flexible to operate and improves extraction precision, so that images of human organs can be extracted accurately and quickly, providing solid technical support for subsequent three-dimensional organ modeling and further clinical application.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of an image to be segmented according to the present invention;
FIG. 3 is a skeleton of a hand-drawn marker of a foreground image of the present invention;
FIG. 4 is a partial region within the region of interest of the present invention;
FIG. 5 is an area of the present invention that completely covers a region of interest;
- FIG. 6 is the trimap of the present invention identifying the approximate regions for segmentation;
- FIG. 7 is a schematic diagram of setting K paths according to the present invention;
FIG. 8 is a black and white mask of a region of interest obtained by a segmentation of the present invention;
FIG. 9 is a foreground skeleton obtained by the refinement algorithm of the present invention;
FIG. 10 is a gradually changing picture to be segmented according to the present invention;
FIG. 11 shows a progressive foreground black and white mask of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings:
the method for automatically segmenting the human body slice image based on the shared matting and ROI gradual change as shown in FIG. 1 specifically comprises the following steps:
s1: reading an interested area of the image, and roughly scrawling the interested area to obtain a skeleton part, wherein the first image to be segmented is shown in fig. 2, and the hand-painted scrawling part is shown in fig. 3.
S2: the inner region of interest a is suitably chosen, as shown in fig. 4, so that it contains most of the region of interest. Finding out the color difference range between the region and the graffiti pixel, and recording the maximum value of the color negative difference loDiff1 between the graffiti pixel and the region pixel and the maximum value of the color positive difference upDiff1 between the graffiti pixel and the region pixel;
s3: covering the interested area and reading the boundary area of the interested area, defining the sum of the internal area A and the boundary area as an area B, searching the color difference range of the area B and the graffiti pixel, and recording the maximum value of negative color difference loDiff2 between the graffiti pixel and the area B pixel and the maximum value of positive color difference Difupf 2 between the graffiti pixel and the area B pixel;
s4: defining an internal area A as a white part, defining a part of an area B which is more than the area A as a gray part, and defining an area which is not contained in the area B as a black part to obtain a whole picture trisection image; namely, a trisection map is generated according to the foreground rough range determined by the S2 and the S3. The area selected by S2 is the white part of the trimap image, the part selected by S3 minus the part selected by S2 is the gray part of the trimap image, and the rest part is the black part of the trimap image. A trimap image was obtained, which is shown in fig. 6.
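The trimap construction in S4 amounts to painting three nested regions. A minimal pure-Python sketch, assuming binary masks for areas A and B and the common 0/128/255 gray coding (the function name and gray levels are illustrative assumptions, not taken from the patent):

```python
def build_trimap(mask_a, mask_b):
    """Build a trimap from two nested 0/1 masks (mask_a contained in mask_b):
    area A -> white (definite foreground), B minus A -> gray (unknown),
    outside B -> black (definite background)."""
    trimap = []
    for row_a, row_b in zip(mask_a, mask_b):
        row = []
        for in_a, in_b in zip(row_a, row_b):
            if in_a:
                row.append(255)   # white part: inside internal area A
            elif in_b:
                row.append(128)   # gray part: in area B but beyond area A
            else:
                row.append(0)     # black part: not contained in area B
        trimap.append(row)
    return trimap
```

The same routine serves S64 later, with the flood-filled regions D1 and D2 in place of A and B.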
S5: obtaining a black-and-white mask of the region of interest by adopting a shared matting algorithm according to the trimap image information, and obtaining a foreground skeleton of the black-and-white mask by adopting a thinning algorithm; and the skeleton is trimmed appropriately.
S6: taking the foreground framework as the foreground scrawling of the next picture, and obtaining a trisection map of the next picture by adopting a flooding algorithm according to the data of loDiff1, upDiff1, loDiff2 and upDiff2 obtained in S2 and S3;
s7: and (5) circulating S5 to S6 until all the pictures are processed to obtain black and white masks of all the picture interested areas. The gradually changing picture to be segmented and the gradually changing foreground black and white mask are shown in fig. 10 and 11.
Further, the black-and-white mask of the region of interest is obtained from the trimap by the shared matting algorithm in the following specific manner. For the preprocessed serialized images, the region selected in S2 is first dilated slightly, pixel by pixel outward, to shrink the unknown region and reduce later computation.
S51: setting a maximum-count parameter K for foreground and background samples, casting K paths from each unknown pixel with an angle of pi/K between paths, and expanding the unknown-pixel region to a KxK rectangular region, wherein the initial angle of the paths changes periodically from pixel to pixel, as shown in FIG. 7;
S52: for each path, recording the image-space and color-space distances between the unknown point and the first foreground or background pixel encountered, and stopping the search for foreground and background pixels when the path passes beyond the edge of the image;
S53: converting the candidate points into color samples, calculating the color difference between each sample point and the unknown point, and counting the pixel mutations on the straight-line path from the unknown point to the sampled foreground and background points; calculating the physical distance between the sampled foreground and background points and the unknown point; and minimizing an objective over foreground-background combinations to find the optimal pair of sampled pixels.
The specific calculation process of S52 and S53 is: converting the candidate points into color samples by formula (1); calculating the color difference between a sample point and the unknown point by formula (2); counting the pixel mutations on the straight-line path from the unknown point to the sampled foreground and background points by formula (3); calculating the physical distance between the sampled foreground or background point and the unknown point by formula (4); and minimizing the combined objective by formula (5) to find the optimal pair of sampled pixels.
To favor paths with as few abrupt pixel changes as possible between the unknown point and the sampled foreground and background points, the pixel colors integrated along the path are used in the calculation.
Combining these four conditions finally yields an objective function over foreground-background pairs; the previously sampled data are processed, the combination minimizing the objective value is recorded, and the best sample point is initially determined. The exponents weighting the four terms are, for example, e_N = 3, e_A = 2, e_f = 1, e_b = 4.
S54: carrying out data processing on the sampling pixels with the optimal combination to obtain a minimum pair of foreground and background data pairs, and defining the data pairs as optimal sampling points;
S55: setting a radius threshold for the unknown pixels in the gray area of the trimap; for each unknown pixel, applying formula (2) within that radius to find the three smallest MP values, obtaining the color data of the three unknown pixels corresponding to those values, and weighting and averaging those data with the weighted-average formula to obtain data T, which is defined as the best sample point;
S56: obtaining new foreground-pixel, background-pixel, transparency and confidence information from the best sample points yields the black-and-white mask of the region of interest. The mask of the region of interest is locally smoothed with Gaussian blur to reduce noise. The resulting black-and-white mask is shown in FIG. 8.
Further, for the black-and-white binary result image obtained by the shared matting algorithm, the foreground skeleton is obtained by applying a thinning algorithm, specifically as follows:
P1. Number the eight neighbors of each foreground pixel in a fixed order: the foreground pixel itself is P1, the pixel directly above it is P2, and the remaining pixels P3 to P9 are numbered clockwise. The thinning process that extracts the skeleton is divided into two sub-processes to preserve the integrity of the skeleton.
P2. First sub-process: if the foreground pixel P1 under examination satisfies the following conditions, it is converted into a background pixel:
(a) 2 <= B(P1) <= 6; (b) A(P1) = 1; (c) P2 x P4 x P6 = 0; (d) P4 x P6 x P8 = 0,
where A(P1) is the number of 0-to-1 transitions in the ordered sequence P2, P3, ..., P9, and B(P1) is the number of non-zero neighbors of P1.
P3. Second sub-process: conditions (a) and (b) are unchanged, while (c) and (d) are replaced by:
(e) P2 x P4 x P8 = 0; (f) P2 x P6 x P8 = 0.
If the foreground pixel under examination satisfies conditions (a), (b), (e) and (f), it is converted into a background pixel.
P4. Loop P1 through P3 until no more foreground pixels are converted into background pixels; the thinning process then ends. The resulting foreground skeleton is shown in FIG. 9.
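Steps P1-P4 are the classic two-subiteration (Zhang-Suen style) thinning. A straightforward pure-Python rendering, assuming a 0/1 image whose border row and column are background:

```python
def zhang_suen_thin(img):
    """Thin a 2-D list of 0/1 pixels (1 = foreground) to its skeleton
    using the two sub-processes P2/P3 with conditions (a)-(f)."""
    img = [row[:] for row in img]           # work on a copy
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise from the pixel directly above P1
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):                 # the two sub-processes
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if not img[y][x]:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)              # B(P1): non-zero neighbors
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))        # A(P1): 0->1 transitions
                    if step == 0:           # conditions (c) and (d)
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:                   # conditions (e) and (f)
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((y, x))
            for y, x in to_clear:           # clear simultaneously (P4)
                img[y][x] = 0
                changed = True
    return img
```

A one-pixel-wide line is already a skeleton and passes through unchanged, while thicker blobs are eroded from the outside in.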
Further, in S6 the foreground skeleton is used as the foreground scribble of the next picture, and the trimap of the next picture is obtained by a flood-fill algorithm using the loDiff1, upDiff1, loDiff2 and upDiff2 values obtained in S2 and S3, specifically as follows:
S61: obtaining the positions and color values of all skeleton pixels, and setting all skeleton pixels as foreground pixels;
S62: setting the loDiff1, upDiff1, loDiff2 and upDiff2 values;
S63: selecting a pixel Q from the foreground pixels and searching the neighbor pixels V in its four-neighborhood: if the negative color difference between the current pixel Q and a neighbor V is less than loDiff1 and the positive color difference between them is less than upDiff1, the neighbor V is added to the selected set;
S64: looping in this way until the selected set no longer grows, i.e. until no more pixels satisfy the condition, and recording the resulting region as D1;
S65: clearing the foreground pixels and re-adding the skeleton pixels to them; selecting a pixel W from the foreground-pixel set and searching the neighbor pixels in its four-neighborhood: if the negative color difference between the current pixel W and a neighbor is less than loDiff2 and the positive color difference between them is less than upDiff2, the neighbor is added to the selected set;
S66: looping in this way until the selected set no longer grows, i.e. until no more pixels satisfy the condition, and recording the resulting region as D2;
S67: region D1 is the white part of the trimap, the part of region D2 beyond region D1 is the gray part, and the remainder is the black part; the resulting trimap is used for segmenting the next picture.
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that can be easily conceived by a person skilled in the art within the technical scope disclosed by the present invention, in accordance with the technical solutions and the inventive concept of the present invention, shall fall within the protection scope of the present invention.
Claims (2)
1. An automatic human body slice image segmentation method based on shared matting and ROI gradual change, characterized by comprising the following steps:
S1: reading the region of interest of the image and roughly scribbling the region of interest;
S2: selecting an internal area A of the region of interest, calculating the color-difference range between the internal area A and the scribble pixels, and recording the maximum negative color difference loDiff1 and the maximum positive color difference upDiff1 between the scribble pixels and the pixels of area A;
S3: covering the region of interest and reading its boundary region, defining the union of the internal area A and the boundary region as area B, searching the color-difference range between area B and the scribble pixels, and recording the maximum negative color difference loDiff2 and the maximum positive color difference upDiff2 between the scribble pixels and the pixels of area B;
S4: defining the internal area A as the white part, the part of area B beyond area A as the gray part, and the area not contained in area B as the black part, thereby obtaining a trimap of the whole picture;
S5: obtaining a black-and-white mask of the region of interest from the trimap by a shared matting algorithm, and obtaining the foreground skeleton of the mask by a parallel thinning algorithm;
S6: taking the foreground skeleton as the foreground scribble of the next picture, and obtaining the trimap of the next picture by a flood-fill algorithm using the loDiff1, upDiff1, loDiff2 and upDiff2 values obtained in S2 and S3;
S7: repeating S5 to S6 until all pictures are processed, yielding black-and-white masks of the regions of interest of all pictures;
in step S6, the trimap of the next picture is obtained by the flood-fill algorithm in the following specific manner:
S61: reading the position and color value of every pixel of the foreground skeleton, and setting the loDiff1, upDiff1, loDiff2 and upDiff2 values;
S62: selecting a pixel Q in the foreground skeleton and searching its four-neighborhood pixels: if the negative color difference between a neighbor and the current pixel Q is less than loDiff1 and the positive color difference is less than upDiff1, the neighbor is added to the selected set; iterating until no pixel satisfies the condition, and marking the resulting area as D1;
S63: selecting a pixel W in the foreground skeleton and searching its four-neighborhood pixels: if the negative color difference between a neighbor and the current pixel W is less than loDiff2 and the positive color difference is less than upDiff2, the neighbor is added to the selected set; iterating until no pixel satisfies the condition, and marking the resulting area as D2;
S64: area D1 is defined as the white part of the trimap, the part of area D2 beyond area D1 as the gray part, and the remainder as the black part.
2. The automatic human body slice image segmentation method based on shared matting and ROI gradual change according to claim 1, further characterized in that in step S5 the black-and-white mask of the region of interest is obtained from the trimap by the shared matting algorithm in the following specific manner:
S51: setting a maximum-count parameter K for foreground and background samples, casting K paths from each unknown pixel with an angle of pi/K between paths, and expanding the unknown-pixel region to a KxK rectangular region, wherein the initial angle of the paths changes periodically;
S52: for each path, recording the image-space and color-space distances between the unknown point and the first foreground or background pixel encountered, and stopping the search for foreground and background pixels when the path passes beyond the edge of the image;
S53: converting the candidate points into color samples; calculating the color difference between each sample point and the unknown point; counting the pixel mutations on the straight-line path from the unknown point to the sampled foreground and background points; calculating the physical distance between the sampled foreground and background points and the unknown point; and minimizing an objective over foreground-background combinations to find the optimal pair of sampled pixels;
S54: processing the optimally combined sample pixels to obtain the minimizing foreground-background data pair, which is defined as the best sample point;
S55: setting a radius threshold for the unknown pixels in the gray area of the trimap; for each unknown pixel, finding within that radius the three smallest MP values, obtaining the color data of the three unknown pixels corresponding to those values, weighting and averaging those data to obtain data T, and defining T as the best sample point;
S56: obtaining new foreground-pixel, background-pixel, transparency and confidence information from the best sample points, yielding the black-and-white mask of the region of interest.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910600636.2A CN110288617B (en) | 2019-07-04 | 2019-07-04 | Automatic human body slice image segmentation method based on shared matting and ROI gradual change |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910600636.2A CN110288617B (en) | 2019-07-04 | 2019-07-04 | Automatic human body slice image segmentation method based on shared matting and ROI gradual change |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110288617A CN110288617A (en) | 2019-09-27 |
CN110288617B true CN110288617B (en) | 2023-02-03 |
Family
ID=68020587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910600636.2A Active CN110288617B (en) | 2019-07-04 | 2019-07-04 | Automatic human body slice image segmentation method based on shared matting and ROI gradual change |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288617B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462027B (en) * | 2020-03-12 | 2023-04-18 | 中国地质大学(武汉) | Multi-focus image fusion method based on multi-scale gradient and matting |
CN112101370B (en) * | 2020-11-11 | 2021-08-24 | 广州卓腾科技有限公司 | Automatic image matting method for pure-color background image, computer-readable storage medium and equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101697229A (en) * | 2009-10-30 | 2010-04-21 | Ningbo University | Method for extracting the region of interest from medical images
CN104008547A (en) * | 2014-05-28 | 2014-08-27 | Dalian University of Technology | Method for visible serial segmentation of human body slice images based on skeleton angular points
CN106875399A (en) * | 2017-01-04 | 2017-06-20 | Nubia Technology Co., Ltd. | Method, device and terminal for implementing interactive image segmentation
CN106875412A (en) * | 2017-02-28 | 2017-06-20 | Chongqing University of Technology | Segmentation and localization method for overlapping fruits based on color difference
CN107452010A (en) * | 2017-07-31 | 2017-12-08 | Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences | Automatic image matting algorithm and device
CN107730528A (en) * | 2017-10-28 | 2018-02-23 | Tianjin University | Interactive image segmentation and fusion method based on the GrabCut algorithm
CN108564528A (en) * | 2018-04-17 | 2018-09-21 | Fuzhou University | Automatic background blurring method for portrait photos based on saliency detection
CN108986109A (en) * | 2018-06-27 | 2018-12-11 | Dalian University of Technology | Automatic segmentation method for serialized visible human slice images
CN108986107A (en) * | 2018-06-15 | 2018-12-11 | Dalian University of Technology | Automatic segmentation method for serialized visible human slice images based on spectral analysis and skeleton scribbles
Non-Patent Citations (5)
Title |
---|
Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis; Howard Chung et al.; SPIE Medical Imaging 2009: Visualization, Image-Guided Procedures, and Modeling; 20090313; 197-204 *
Adaptive clothing object matting based on OneCut and the shared matting algorithm; Meng Rui et al.; Intelligent Computer and Applications; 20151001; Vol. 5, No. 5; 84-88 *
Research on image segmentation techniques based on human body slices; Liu Maoqi; China Master's Theses Full-text Database (Information Science and Technology); 20060915; I138-658 *
Multi-focus image fusion method based on image matting; Zhang Shenglin et al.; Journal of Computer Applications; 20160710 (No. 7); 1949-1953 *
Improvement of ROI extraction based on digital matting and its application to CT images; Peng Sha et al.; Proceedings of the 30th Anniversary Meeting of the Chinese Society of Biomedical Engineering and the 2010 CSBME Academic Conference; 20110620; 462-464 *
Also Published As
Publication number | Publication date |
---|---|
CN110288617A (en) | 2019-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112017191B (en) | Liver pathology image segmentation model establishment and segmentation method based on attention mechanism | |
CN111145209B (en) | Medical image segmentation method, device, equipment and storage medium | |
CN111985536B (en) | Gastroscopic pathology image classification method based on weakly supervised learning | |
WO2018107939A1 (en) | Edge completeness-based optimal identification method for image segmentation | |
US7245766B2 (en) | Method and apparatus for determining a region in an image based on a user input | |
CN109460735B (en) | Document binarization processing method, system and device based on graph semi-supervised learning | |
CN109978848B (en) | Method for detecting hard exudation in fundus image based on multi-light-source color constancy model | |
CN111709929B (en) | Lung canceration region segmentation and classification detection system | |
CN111462076A (en) | Method and system for detecting fuzzy area of full-slice digital pathological image | |
CN110288617B (en) | Automatic human body slice image segmentation method based on shared matting and ROI gradual change | |
CN115331245B (en) | Table structure identification method based on image instance segmentation | |
JP2013536960A (en) | System and method for synthesizing portrait sketches from photographs | |
CN109360191B (en) | Image significance detection method based on variational self-encoder | |
CN112348059A (en) | Deep learning-based method and system for classifying multiple dyeing pathological images | |
CN108986109B (en) | Automatic segmentation method for serialized visible human body slice images | |
CN111292315A (en) | Rapid registration algorithm for pathological section tissue area | |
CN111210447B (en) | Hematoxylin-eosin staining pathological image hierarchical segmentation method and terminal | |
CN113160185A (en) | Method for guiding cervical cell segmentation by using generated boundary position | |
CN116645592A (en) | Crack detection method based on image processing and storage medium | |
CN112330561A (en) | Medical image segmentation method based on interactive foreground extraction and information entropy watershed | |
JP3330829B2 (en) | Automatic detection method of evaluable area in images of machine parts | |
CN110942467A (en) | Improved watershed image segmentation method based on PSO-FCM | |
CN111415350B (en) | Colposcope image identification method for detecting cervical lesions | |
CN109993756B (en) | General medical image segmentation method based on graph model and continuous stepwise optimization | |
CN110443817B (en) | Method for improving image segmentation precision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||