CN109285166B - Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network - Google Patents

Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network

Info

Publication number
CN109285166B
Authority
CN
China
Prior art keywords
chromosome
detection
pure
image
training
Prior art date
Legal status
Active
Application number
CN201811097643.7A
Other languages
Chinese (zh)
Other versions
CN109285166A (en)
Inventor
伍业峰 (Wu Yefeng)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201811097643.7A
Publication of CN109285166A
Application granted
Publication of CN109285166B
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/13 Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30072 Microarray; Biochip, DNA array; Well plate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Measuring Or Testing Involving Enzymes Or Micro-Organisms (AREA)

Abstract

The invention discloses an automatic segmentation method for overlapped and adhered chromosomes based on a full convolution network, comprising the following steps: using the full convolution network, training on pairs of manually marked pure-color maps and their original images, and on pairs of manually marked pure-color line maps and their original images, to generate a first training library and a second training library; reading in an original chromosome image; performing a first detection on the chromosome image with the first training library to generate and store a first detection map; performing a second detection on the chromosome image with the second training library to generate and store a second detection map; performing pixel processing on the first detection map and the second detection map; performing contour detection on the processed image; and cutting out the chromosome images of all contour regions using the position coordinate data corresponding to the detected contour regions. Overlapped and adhered chromosomes are thus segmented quickly and effectively by the full convolution network, achieving efficient, automatic segmentation of overlapped and adhered chromosomes with wide coverage and reducing the workload and difficulty of manual segmentation.

Description

Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network
Technical Field
The invention relates to the technical field of automatic segmentation of overlapping and conglutinated chromosomes, in particular to an automatic segmentation method of overlapping and conglutinated chromosomes based on a full convolution network.
Background
Manual chromosome analysis is extremely labor-intensive: medical staff must reproduce the chromosome micrograph by eye and by hand, and the subsequent comparison, analysis, pairing, ordering and classification consume a great deal of manpower and material resources. Some domestic automatic chromosome analysis systems currently mount a high-definition camera on a microscope; the camera captures the image under the microscope, transmits it over a data line to a dedicated computer running chromosome analysis software, and the image is displayed in the software. Although such systems can segment chromosomes automatically, segmenting overlapped and adhered chromosomes remains difficult. The reference document with application number 200810218815.1 discloses a method for segmenting overlapped and adhered chromosome images, but it only segments X-shaped overlapped and adhered chromosomes and cannot effectively segment Y-shaped and multi-chromosome overlaps.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an automatic segmentation method for overlapped and adhered chromosomes based on a full convolution network, so as to segment overlapped and adhered chromosomes efficiently and automatically, cover a wide range of chromosome shapes, and reduce the workload and difficulty of manual segmentation.
To achieve this purpose, the technical scheme of the invention is as follows: a method for automatically segmenting overlapped and adhered chromosomes based on a full convolution network, the method comprising the following steps:
Step 1: manually cover the chromosome image regions in the original, unsegmented multi-chromosome images with a pure color to generate a plurality of first pure-color maps; compare each first pure-color map with its corresponding original image to form a plurality of first training samples; and train the first training samples with a full convolution network algorithm to generate a first training library;
Step 2: manually draw pure-color segmentation lines across the overlapped and adhered chromosomes in the original, unsegmented images to generate a plurality of first line-segmentation maps; compare each first line-segmentation map with its corresponding original image to form a plurality of second training samples; and train the second training samples with the full convolution network algorithm to generate a second training library;
Step 3: read in an original chromosome image;
Step 4: using the first training library, the full convolution network algorithm performs a first detection on the read-in chromosome image, computes the positions of all chromosome pixels, marks all of those pixels with the pure color, and generates and stores a first detection map;
Step 5: using the second training library, the full convolution network algorithm performs a second detection on the read-in chromosome image, computes the positions of all pixels lying on the segmentation boundaries of overlapped and adhered chromosomes, marks those pixels with pure-color lines, and generates and stores a second detection map;
Step 6: perform pixel processing on the first detection map and the second detection map to generate a chromosome position map;
Step 7: perform contour detection on the chromosome position map to obtain the position coordinate data corresponding to each contour region;
Step 8: using the position coordinate data of all contour regions obtained in step 7, cut the chromosome images of all contour regions out of the original chromosome image.
Preferably, the pure colors used for marking in step 1, step 2, step 4 and step 5 are all pure blue.
Preferably, in step 1, comparing the plurality of first pure-color maps with the original images to form the plurality of first training samples means comparing the common features of each first pure-color map and its original image to obtain the chromosome model features; in step 2, comparing the plurality of first line-segmentation maps with the original images to form the plurality of second training samples means comparing the common features of each first line-segmentation map and its original image to obtain the segmentation-line model features.
Preferably, the specific method of performing pixel processing on the first detection map and the second detection map to generate the chromosome position map is as follows: set the pure-color pixels in the first detection map to pure white and set all remaining pixels of the first detection map to pure black; then scan the pixels of the image generated in the previous step and set every pixel whose position coincides with a pure-color pixel in the second detection map to pure black, obtaining the chromosome position map. This increases the contrast between the chromosome regions and the colors of the other regions and thereby facilitates chromosome cutting.
Preferably, the chromosome position map is a picture in which the chromosome position is pure white and the remaining positions are pure black.
Preferably, after the chromosome images of all contour regions are cut out from the chromosome original image, all the chromosome images are sharpened and edge-enhanced, and each processed chromosome image is stored.
The invention has the following advantages:
(1) The method can rapidly and effectively segment overlapped and adhered chromosomes through the full convolution network, covers chromosomes of various shapes, reduces the difficulty and workload of manual segmentation, and improves segmentation efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart of a method for automatically segmenting overlapping and conglutinated chromosomes based on a full convolution network according to an embodiment of the present invention;
FIG. 2 is an original image before chromosome segmentation according to an embodiment of the present invention;
FIG. 3 is a first pure-color map disclosed in an embodiment of the present invention;
FIG. 4 is a first line-segmentation map disclosed in an embodiment of the present invention;
FIG. 5 is a first detection map disclosed in an embodiment of the present invention;
FIG. 6 is a second detection map disclosed in an embodiment of the present invention;
FIG. 7 is a chromosome position map disclosed in an embodiment of the present invention;
FIG. 8 is a cut-out chromosome picture disclosed in an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The invention provides an automatic segmentation method for overlapped and adhered chromosomes based on a full convolution network. Its working principle is to segment overlapped and adhered chromosomes rapidly and effectively through the full convolution network, so as to segment them efficiently and automatically, cover a wide range of chromosome shapes, and reduce the workload and difficulty of manual segmentation.
The present invention will be described in further detail with reference to examples and specific embodiments.
As shown in fig. 1-8, a method for automatically segmenting overlapped and conglutinated chromosomes based on a full convolution network includes the following steps:
Step 1: manually cover the chromosome image regions in the original, unsegmented multi-chromosome images (an original image is shown in FIG. 2) with a pure color to generate a plurality of first pure-color maps (shown in FIG. 3); compare each first pure-color map with its corresponding original image to form a plurality of first training samples; and train the first training samples with a full convolution network algorithm to generate a first training library;
Step 2: manually draw pure-color segmentation lines across the overlapped and adhered chromosomes in the original, unsegmented images to generate a plurality of first line-segmentation maps (shown in FIG. 4); compare each first line-segmentation map with its corresponding original image to form a plurality of second training samples; and train the second training samples with the full convolution network algorithm to generate a second training library (a minimal code sketch of this training and detection pipeline is given after the step list below);
Step 3: read in an original chromosome image;
Step 4: using the first training library, the full convolution network algorithm performs a first detection on the read-in chromosome image, computes the positions of all chromosome pixels, marks all of those pixels with the pure color, and generates and stores a first detection map (shown in FIG. 5);
Step 5: using the second training library, the full convolution network algorithm performs a second detection on the read-in chromosome image, computes the positions of all pixels lying on the segmentation boundaries of overlapped and adhered chromosomes, marks those pixels with pure-color lines, and generates and stores a second detection map (shown in FIG. 6);
Step 6: perform pixel processing on the first detection map and the second detection map to generate a chromosome position map (shown in FIG. 7);
Step 7: perform contour detection on the chromosome position map to obtain the position coordinate data corresponding to each contour region;
Step 8: using the position coordinate data of all contour regions obtained in step 7, cut the chromosome images of all contour regions out of the original chromosome image (as shown in FIG. 8).
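For illustration only, the following is a minimal code sketch of steps 1 to 5 above, written in Python with PyTorch and OpenCV. The patent does not specify a framework, a network architecture, hyperparameters, or a file layout, so all of those details (the TinyFCN model, the file paths, the 0.5 threshold) are assumptions rather than the actual implementation.

```python
import glob

import cv2
import numpy as np
import torch
import torch.nn as nn

PURE_BLUE_BGR = np.array([255, 0, 0], dtype=np.uint8)  # "pure blue" marking (OpenCV uses BGR order)


def mask_from_annotation(annotated_bgr):
    """Binary label: 1.0 where the annotator painted pure blue, 0.0 elsewhere."""
    return np.all(annotated_bgr == PURE_BLUE_BGR, axis=-1).astype(np.float32)


class TinyFCN(nn.Module):
    """Small encoder-decoder fully convolutional network giving a per-pixel foreground score.
    Assumes even image height and width so the upsampling restores the input size."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.net(x)


def train_library(original_paths, annotated_paths, epochs=20):
    """A "training library" here is simply the trained weights for one annotation type."""
    model = TinyFCN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for orig_path, ann_path in zip(original_paths, annotated_paths):
            gray = cv2.imread(orig_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
            label = mask_from_annotation(cv2.imread(ann_path, cv2.IMREAD_COLOR))
            x = torch.from_numpy(gray)[None, None]   # shape (1, 1, H, W)
            y = torch.from_numpy(label)[None, None]
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model


def detect(model, gray_image):
    """Run one trained library over a grayscale chromosome image -> binary detection map."""
    x = torch.from_numpy(gray_image.astype(np.float32) / 255.0)[None, None]
    with torch.no_grad():
        prob = torch.sigmoid(model(x))[0, 0].numpy()
    return (prob > 0.5).astype(np.uint8)  # 1 = detected pixel


# Hypothetical file layout: original images plus the two kinds of manual annotations.
first_library = train_library(sorted(glob.glob("originals/*.png")),
                              sorted(glob.glob("pure_color_masks/*.png")))
second_library = train_library(sorted(glob.glob("originals/*.png")),
                               sorted(glob.glob("split_line_maps/*.png")))

image = cv2.imread("sample_metaphase.png", cv2.IMREAD_GRAYSCALE)
first_detection = detect(first_library, image)    # step 4: all chromosome pixels
second_detection = detect(second_library, image)  # step 5: pixels on the adhesion/overlap split lines
```

As in steps 1 and 2, one training library is produced per annotation type, and the two resulting detection maps feed the pixel processing of step 6.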
Wherein, the pure colors used for marking in the step 1, the step 2, the step 4 and the step 5 are all pure blue.
Wherein, in step 1, comparing the plurality of first pure-color maps with the original images to form the plurality of first training samples means comparing the common features of each first pure-color map and its original image to obtain the chromosome model features; in step 2, comparing the plurality of first line-segmentation maps with the original images to form the plurality of second training samples means comparing the common features of each first line-segmentation map and its original image to obtain the segmentation-line model features.
The specific method of performing pixel processing on the first detection map and the second detection map to generate the chromosome position map is as follows: set the pure-color pixels in the first detection map to pure white and set all remaining pixels of the first detection map to pure black; then scan the pixels of the image generated in the previous step and set every pixel whose position coincides with a pure-color pixel in the second detection map to pure black, obtaining the chromosome position map. This increases the contrast between the chromosome regions and the colors of the other regions and thereby facilitates chromosome cutting.
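The pixel processing just described can be expressed compactly. The sketch below is a minimal NumPy illustration that assumes the two detection maps have already been reduced to binary arrays of the same size as the original image (1 where a pure-blue pixel was detected, 0 elsewhere); the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np


def make_chromosome_position_map(first_detection, second_detection):
    """Step 6: chromosome pixels become pure white, everything else pure black,
    and the split-line pixels from the second detection are forced back to black."""
    position_map = np.where(first_detection == 1, 255, 0).astype(np.uint8)
    position_map[second_detection == 1] = 0
    return position_map
```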
The chromosome position map is thus an image in which the chromosome positions are pure white and all remaining positions are pure black.
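With the chromosome position map in this black-and-white form, steps 7 and 8 can be sketched with standard OpenCV calls. cv2.findContours and cv2.boundingRect are one plausible realisation, since the patent does not name a specific contour-detection algorithm, and the minimum-area filter is an added assumption to suppress noise.

```python
import cv2


def cut_chromosomes(position_map, original_image, min_area=50):
    """Steps 7 and 8: find contour regions in the position map and crop them from the original."""
    contours, _ = cv2.findContours(position_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:  # ignore specks; the threshold is an assumption
            continue
        x, y, w, h = cv2.boundingRect(contour)   # position coordinate data of this contour region
        crops.append(original_image[y:y + h, x:x + w].copy())
    return crops
```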
After the chromosome pictures of all contour regions are cut out of the original chromosome image, every chromosome picture is sharpened and edge-enhanced, and each processed chromosome picture is stored.
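As one possible realisation of this post-processing (the patent does not prescribe particular filters), the sketch below sharpens each crop with a standard 3x3 sharpening kernel and emphasises edges by adding back a scaled Laplacian before saving; the kernel, the 0.3 weight and the output file naming are assumptions.

```python
import cv2
import numpy as np

SHARPEN_KERNEL = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]], dtype=np.float32)


def enhance_and_save(crops, prefix="chromosome"):
    """Sharpen each cropped chromosome picture, enhance its edges, and write it to disk."""
    for i, crop in enumerate(crops):
        sharpened = cv2.filter2D(crop, -1, SHARPEN_KERNEL)
        edges = cv2.Laplacian(sharpened, cv2.CV_16S, ksize=3)
        enhanced = cv2.addWeighted(sharpened.astype(np.int16), 1.0, edges, 0.3, 0)
        enhanced = np.clip(enhanced, 0, 255).astype(np.uint8)
        cv2.imwrite(f"{prefix}_{i:02d}.png", enhanced)
```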
The data generated while segmenting overlapped and adhered chromosomes can be fed back into training, further extending the range of overlapped and adhered chromosomes that can be cut.
The above description is only the preferred embodiment of the method for automatically segmenting overlapped and conglutinated chromosomes based on the full convolution network disclosed by the present invention, and it should be noted that, for those skilled in the art, many variations and modifications can be made without departing from the inventive concept of the present invention, and these variations and modifications are within the scope of the present invention.

Claims (2)

1. A full convolution network-based automatic segmentation method for overlapped and conglutinated chromosomes is characterized by comprising the following steps:
Step 1: manually covering the chromosome image regions in the original, unsegmented multi-chromosome images with a pure color to generate a plurality of first pure-color maps; comparing the first pure-color maps with their corresponding original images to form a plurality of first training samples; and training the first training samples with a full convolution network algorithm to generate a first training library, wherein comparing the plurality of first pure-color maps with the original images to form the plurality of first training samples means comparing the common features of the first pure-color maps and the original images to obtain the chromosome model features;
Step 2: manually drawing pure-color segmentation lines across the overlapped and adhered chromosomes in the original, unsegmented images to generate a plurality of first line-segmentation maps; comparing the first line-segmentation maps with their corresponding original images to form a plurality of second training samples; and training the second training samples with the full convolution network algorithm to generate a second training library, wherein comparing the plurality of first line-segmentation maps with the original images to form the plurality of second training samples means comparing the common features of the first line-segmentation maps and the original images to obtain the segmentation-line model features;
Step 3: reading in an original chromosome image;
Step 4: using the first training library, performing, by the full convolution network algorithm, a first detection on the read-in chromosome image, computing the positions of all chromosome pixels, marking all of those pixels with the pure color, and generating and storing a first detection map;
Step 5: using the second training library, performing, by the full convolution network algorithm, a second detection on the read-in chromosome image, computing the positions of all pixels lying on the segmentation boundaries of overlapped and adhered chromosomes, marking those pixels with pure-color lines, and generating and storing a second detection map;
Step 6: performing pixel processing on the first detection map and the second detection map to generate a chromosome position map, the specific method being: setting the pure-color pixels in the first detection map to pure white and setting all remaining pixels of the first detection map to pure black; scanning the pixels of the image generated in the previous step and setting every pixel whose position coincides with a pure-color pixel in the second detection map to pure black to obtain the chromosome position map, the chromosome position map being an image in which the chromosome positions are pure white and all remaining positions are pure black;
Step 7: performing contour detection on the chromosome position map to obtain the position coordinate data corresponding to each contour region;
Step 8: cutting the chromosome images of all contour regions out of the original chromosome image using the position coordinate data corresponding to all contour regions obtained in step 7;
wherein the pure colors used for marking in step 1, step 2, step 4 and step 5 are all pure blue.
2. The method for automatically segmenting overlapped and conglutinated chromosomes based on the full convolution network as claimed in claim 1, wherein after the chromosome images of all contour regions are cut out of the original chromosome image, every chromosome image is sharpened and edge-enhanced, and each processed chromosome image is saved.
CN201811097643.7A 2018-09-20 2018-09-20 Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network Active CN109285166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811097643.7A CN109285166B (en) 2018-09-20 2018-09-20 Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811097643.7A CN109285166B (en) 2018-09-20 2018-09-20 Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network

Publications (2)

Publication Number Publication Date
CN109285166A CN109285166A (en) 2019-01-29
CN109285166B (en) 2023-03-31

Family

ID=65181597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811097643.7A Active CN109285166B (en) 2018-09-20 2018-09-20 Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network

Country Status (1)

Country Link
CN (1) CN109285166B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009619A (en) * 2019-04-02 2019-07-12 清华大学深圳研究生院 A kind of image analysis method based on fluorescence-encoded liquid phase biochip

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520890B (en) * 2008-12-31 2011-04-20 广东威创视讯科技股份有限公司 Grey scale characteristic graph-based automatic separation method for conglutinated chromosomes
CN106296635B (en) * 2015-05-29 2019-11-22 厦门鹭佳生物科技有限公司 A kind of fluorescence in situ hybridization (FISH) image Parallel Processing and analysis method
CN106529555B (en) * 2016-11-04 2019-12-06 四川大学 DR (digital radiography) sheet lung contour extraction method based on full convolution network
CN107016681B (en) * 2017-03-29 2023-08-25 浙江师范大学 Brain MRI tumor segmentation method based on full convolution network
CN108550133B (en) * 2018-03-02 2021-05-18 浙江工业大学 Cancer cell detection method based on fast R-CNN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Crowdsourcing for Chromosome Segmentation and Deep Classification; Monika Sharma et al.; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017-08-24; full text *
Research status and future goals of automatic chromosome analysis systems; 闫文忠; 《中国组织工程研究与临床康复》; 2009-03-26 (No. 13); full text *

Also Published As

Publication number Publication date
CN109285166A (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN108389257B (en) Generating a three-dimensional model from a scanned object
WO2019113572A1 (en) Computer vision systems and methods for geospatial property feature detection and extraction from digital images
CN110473221B (en) Automatic target object scanning system and method
CN112614062B (en) Colony counting method, colony counting device and computer storage medium
CN104143094A (en) Test paper automatic test paper marking processing method and system without answer sheet
CN106709492A (en) Examination paper image processing method and device, and computer readable storage medium
CN110188495B (en) Method for generating three-dimensional house type graph based on two-dimensional house type graph of deep learning
CN111311487B (en) Rapid splicing method and system for photovoltaic module images
CN104597057A (en) Columnar diode surface defect detection device based on machine vision
US20220375188A1 (en) Semi-automatic image data labeling method, electronic apparatus, and storage medium
US20220414892A1 (en) High-precision semi-automatic image data labeling method, electronic apparatus, and storage medium
CN115359239A (en) Wind power blade defect detection and positioning method and device, storage medium and electronic equipment
CN112381081A (en) Official seal character automatic identification method and device, computer equipment and storage medium
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN113642576B (en) Method and device for generating training image set in target detection and semantic segmentation tasks
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN111027538A (en) Container detection method based on instance segmentation model
CN112712513A (en) Product defect detection method, device, equipment and computer readable storage medium
KR20230030259A (en) Deep learning-based data augmentation method for product defect detection learning
CN109741273A (en) A kind of mobile phone photograph low-quality images automatically process and methods of marking
CN115170518A (en) Cell detection method and system based on deep learning and machine vision
CN109285166B (en) Overlapping and conglutinating chromosome automatic segmentation method based on full convolution network
CN113947563A (en) Cable process quality dynamic defect detection method based on deep learning
CN113065400A (en) Invoice seal detection method and device based on anchor-frame-free two-stage network
CN111275756A (en) Spool positioning method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant