CN111627033A - Hard sample instance segmentation method and device and computer readable storage medium - Google Patents


Info

Publication number: CN111627033A
Authority: CN (China)
Prior art keywords: image, IOU, sample, segmentation, value
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010480111.2A
Other languages: Chinese (zh)
Other versions: CN111627033B
Inventors: 薛均晓, 程君进, 徐明亮, 吕培
Current assignee: Zhengzhou University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Zhengzhou University
Priority date: (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Zhengzhou University
Priority to CN202010480111.2A
Publication of CN111627033A
Application granted; publication of CN111627033B
Legal status: active

Classifications

    • G06T 7/12: Image analysis; segmentation; edge-based segmentation
    • G06F 18/23213: Pattern recognition; non-hierarchical clustering with a fixed number of clusters, e.g. K-means clustering
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/70: Denoising; smoothing
    • G06T 5/73: Deblurring; sharpening
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation involving thresholding
    • G06T 7/194: Foreground-background segmentation
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hard sample instance segmentation method, a device, and a computer readable storage medium, wherein the method comprises the following steps: preprocessing an image, so that the foreground and the background of the image are easy to distinguish; performing hard sample segmentation on the image, to distinguish positive samples from negative samples in the image; and performing convolution training on the image, for instance segmentation of hard samples in the image. The invention preprocesses the original image so that its foreground and background are easier to distinguish and its boundaries are clearer. Hard sample segmentation of the preprocessed image improves recognition accuracy during convolution training. The convolution training performs autonomous learning on large-scale training samples; a large number of training samples helps activate deep network neurons and memorize and analyze the states of the target object under different colors, forms, and environments.

Description

Hard sample instance segmentation method and device and computer readable storage medium
Technical Field
The invention relates to the field of computer vision and graphic image processing, and in particular to a hard sample instance segmentation method, device, and computer readable storage medium.
Background
Accurate instance segmentation of image datasets plays a vital role in artificial intelligence fields such as autonomous driving, robotics, and virtual reality.
Current instance segmentation methods can be divided into two categories: instance segmentation algorithms based on traditional methods, and instance segmentation algorithms based on deep learning.
Instance segmentation based on traditional methods: traditional methods include region-based segmentation, such as splitting an image into small blocks with homogeneous properties, or segmenting an image using multiple color spaces. There are also Markov Random Field (MRF) models, which segment images by indirectly estimating the random process generated by the MRF, while Conditional Random Field (CRF) models are often used as post-processing modules for deep-learning-based semantic segmentation algorithms to refine the segmentation. However, traditional instance segmentation methods have low segmentation accuracy.
Among deep-learning-based instance segmentation methods, Mask R-CNN replaces the RoI Pooling layer of Faster R-CNN with a RoI Align layer and adds a mask branch, evolving from target detection and target classification to an instance segmentation technique. RoI Align avoids quantization: using bilinear interpolation, it obtains more accurate feature map information and reduces the errors that quantization introduces when extracting feature maps. However, plain RoI Align only computes each output point from the pixels within the rectangular window of the convolution kernel around it, so the receptive field is always rectangular no matter how deep the network is; since the shapes of many real-world objects vary, plain RoI Align has low adaptability.
Disclosure of Invention
The invention mainly addresses the technical problems of low segmentation accuracy and low adaptability by providing a hard sample instance segmentation method, device, and computer readable storage medium with high segmentation accuracy and a wide application range.
To solve these technical problems, the invention adopts the following technical scheme: a hard sample instance segmentation method, comprising:
preprocessing an image, so that the foreground and the background of the image are easy to distinguish;
performing hard sample segmentation on the image, to distinguish positive samples from negative samples in the image;
and performing convolution training on the image, for instance segmentation of hard samples in the image.
In another embodiment of the hard sample instance segmentation method, the image data is preprocessed using sharpening and clustering methods.
In another embodiment, the sharpening is Laplacian sharpening and the clustering is K-means clustering.
In another embodiment, the hard sample segmentation of the image comprises:
classifying the images, namely classifying the preprocessed images with a classifier;
calculating the image IOU value, namely calculating the IOU value of each classified image and setting an IOU threshold;
and comparing each IOU value with the set IOU threshold, and outputting the images whose IOU value is greater than the threshold.
In another embodiment, the IOU threshold is 0.5.
In another embodiment, the IOU value is compared with the set IOU threshold; images whose IOU value is smaller than the threshold are input into the classifier again for classification, their IOU values are recalculated and compared with the threshold, images whose IOU value is greater than the threshold are output, and the classification, IOU calculation, and threshold comparison are repeated for the remaining images.
In another embodiment, the image is convolution-trained through a convolutional neural network.
In another embodiment, the convolutional neural network comprises Deformable ROI Align.
A hard sample instance segmentation apparatus, comprising:
an image preprocessing module, used to make the foreground and the background of the image easy to distinguish;
an image hard sample segmentation module, used to distinguish positive samples from negative samples in the image;
and an image convolution training module, used for instance segmentation of hard samples in the image.
A computer-readable storage medium for hard sample instance segmentation, having stored thereon a computer program which, when executed by a processor, implements the steps of the hard sample instance segmentation method.
The invention has the beneficial effects that: the original image is preprocessed so that its foreground and background are easier to distinguish and its boundaries are clearer; hard sample segmentation of the preprocessed image improves recognition accuracy during convolution training; and the convolution training performs autonomous learning on large-scale training samples, where a large number of training samples helps activate deep network neurons and memorize and analyze the states of the target object under different colors, forms, and environments.
Drawings
FIG. 1 is a flow diagram of an embodiment of the hard sample instance segmentation method of the present invention;
FIG. 2 is a sharpening flow diagram according to an embodiment of the hard sample instance segmentation method of the present invention;
FIG. 3 is a clustering flow diagram according to an embodiment of the hard sample instance segmentation method of the present invention;
FIG. 4 is a hard sample segmentation flow diagram according to an embodiment of the hard sample instance segmentation method of the present invention;
FIG. 5 is a composition diagram of the convolutional neural network according to an embodiment of the hard sample instance segmentation method of the present invention;
FIG. 6 is a composition diagram of the mask head and the mask IoU head according to an embodiment of the hard sample instance segmentation method of the present invention;
FIG. 7 is a schematic block diagram of an embodiment of the hard sample instance segmentation apparatus of the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present invention.
Detailed Description
In order to facilitate an understanding of the invention, the invention is described in more detail below with reference to the accompanying drawings and specific examples. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It is to be noted that, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In order to solve the problems of low segmentation precision and low adaptability, the invention provides a method for segmenting a difficult sample example, which is high in segmentation precision and wide in application range.
As shown in FIG. 1, a hard sample instance segmentation method includes:
S10, preprocessing the image to make the foreground and the background easy to distinguish;
S20, performing hard sample segmentation on the image to distinguish positive samples from negative samples in the image;
S30, performing convolution training on the image for instance segmentation of hard samples in the image.
As shown in FIG. 2, the image data is preprocessed by sharpening and clustering. Preferably, the sharpening is Laplacian sharpening and the clustering is K-means clustering.
When the foreground and the background of an image are hard to tell apart, or the contours of an object in the image are weak, sharpening strengthens the lines between foreground and background and along instance boundaries, so that the boundaries can be clearly delineated during instance segmentation and a sharper image is obtained.
The Laplacian sharpening method comprises the following steps:
S101, perform a Laplacian transform on the original image to obtain a transformed image, highlighting fine details in the image.
S102, perform a gradient transform on the original image to obtain a gradient image, highlighting the edges of the original image.
S103, smooth the gradient image with a 5x5 mean filter to obtain a smoothed image, achieving noise reduction.
S104, mask the transformed image with the smoothed image to obtain a masked image.
S105, expand the gray-scale range of the image by applying a power-law transform to the masked image, obtaining the sharpened image.
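As a concrete illustration of steps S101-S105, the pipeline below sketches the sharpening chain with NumPy. The specific kernel choices (a 4-neighbour Laplacian, Sobel gradients) and the gamma value are assumptions, since the patent does not fix them; only the 5x5 mean filter is stated in the text.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def conv2d(img, kernel):
    """Naive same-size 2-D correlation with edge padding (for illustration)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    return np.array([[np.sum(padded[i:i + kh, j:j + kw] * kernel)
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def sharpen(img, gamma=0.5):
    """Steps S101-S105: detail extraction, gradient mask, power-law stretch."""
    img = img.astype(float)
    detail = img - conv2d(img, LAPLACIAN)                  # S101: fine detail
    grad = np.abs(conv2d(img, SOBEL_X)) + np.abs(conv2d(img, SOBEL_X.T))  # S102
    smooth = conv2d(grad, np.full((5, 5), 1.0 / 25.0))     # S103: 5x5 mean filter
    mask = detail * (smooth / (smooth.max() + 1e-8))       # S104: masking
    norm = np.clip(mask, 0.0, None) / (mask.max() + 1e-8)  # scale to [0, 1]
    return (norm ** gamma) * 255.0                         # S105: power-law stretch
```

With gamma below 1, the power-law step brightens mid-range values and widens the usable gray-scale range, which is the stated purpose of S105.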
As shown in FIG. 3, clustering effectively reduces classification errors. The clustering operation groups pictures sharing common feature information and labels them by class, giving such pictures more accurate labels; during convolution training, the convolutional neural network learns from these class labels, reducing misclassified samples and the workload of hard sample segmentation.
The sharpened image is segmented using K-means clustering, which comprises the following steps:
S106, determine the K initial cluster centers required by K-means clustering using kernel density estimation.
Kernel density estimation is a non-parametric modeling method: the probability density function is estimated directly from the continuously varying values of the image pixels, without assuming a pixel-value distribution in advance, yielding a smooth estimated curve. The Gaussian function is selected as the kernel function, and the pixel gray values of m grid points selected with equal probability are taken as the observed values.
Gaussian kernel density estimation function relation:

f(y_j) = (1 / (n h)) * Σ_{x,y} (1 / sqrt(2π)) exp( -(y_j - I_xy)^2 / (2 h^2) )

where the m grid points y_j are selected with equal probability over the gray-scale range of the image:

y_j = Min(I_xy) + (j - 1) (Max(I_xy) - Min(I_xy)) / (m - 1),  j = 1, 2, 3, ..., m,  m = 256

and the window width h is

h = 0.9 * min(STD, IQR / 1.34) * n^(-1/5)

where I_xy is the gray value of the pixel in row x and column y, y_j is the gray value of the j-th grid point, Max(I_xy) is the maximum image pixel gray value, Min(I_xy) is the minimum image pixel gray value, h is the window width, STD is the standard deviation of the image pixels, IQR is the interquartile range of the image pixels, and n is the number of pixels in the scanned image.
S107, assign each point to its nearest centroid, forming K clusters.
S108, recalculate the centroid of each cluster.
S109, judge whether the centroids have changed.
S1010, if the centroids have changed, return to S107 and continue until they no longer change, then output the preprocessed image.
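A minimal sketch of steps S106-S1010 on a 1-D gray-value distribution, assuming NumPy. The Silverman-style bandwidth and the peak-picking rule for choosing the K initial centers are assumptions filling in details the patent states only as formulas; function names are illustrative.

```python
import numpy as np

def kde_initial_centers(pixels, k, m=256):
    """S106: Gaussian kernel density estimate over m gray-level grid points;
    the k strongest local maxima seed the K-means centers."""
    grid = np.linspace(pixels.min(), pixels.max(), m)
    iqr = np.subtract(*np.percentile(pixels, [75, 25]))
    # Silverman-style window width from STD and IQR (assumed form)
    h = max(0.9 * min(pixels.std(), iqr / 1.34) * len(pixels) ** -0.2, 1e-3)
    dens = np.exp(-0.5 * ((grid[:, None] - pixels[None, :]) / h) ** 2).sum(axis=1)
    peaks = [i for i in range(1, m - 1)
             if dens[i] >= dens[i - 1] and dens[i] >= dens[i + 1]]
    peaks.sort(key=lambda i: -dens[i])
    centers = grid[peaks[:k]]
    if len(centers) < k:  # fallback: evenly spaced centers
        centers = np.linspace(pixels.min(), pixels.max(), k)
    return np.sort(centers)

def kmeans_gray(pixels, k, iters=100):
    """S107-S1010: assign points to the nearest centroid, recompute centroids,
    and stop when the centroids no longer change."""
    centers = kde_initial_centers(pixels, k)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)  # S107
        new = np.array([pixels[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])                      # S108
        if np.allclose(new, centers):                                            # S109
            break
        centers = new                                                            # S1010
    return centers, labels
```

Seeding K-means from density peaks rather than random points makes the iteration deterministic and tends to start each center inside a genuine gray-level mode.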
As shown in FIG. 4, hard sample segmentation reduces the error of judging positive samples as negative during classification and increases the number of positive samples, so the convolutional neural network obtains more effective information during training and learning. This alleviates the problems of low recall in target detection and sample imbalance, and improves the target recognition accuracy of the classification network.
Hard sample segmentation of the preprocessed image comprises the following steps:
S201, classify the images, classifying the preprocessed images with a classifier.
The classifier is part of the convolutional neural network; it classifies the categories of the instances in the preprocessed image according to the information the convolutional neural network has learned from the preprocessed image.
S202, in the classifier, process the preprocessed image, calculate its IOU value, and set the IOU threshold to 0.5.
IOU value: the IOU value is often used as the criterion in target detection to judge detection accuracy. In target detection, the target to be labeled is framed by a box, and a box is likewise predicted during detection; the IOU is calculated as follows:
IOU = area(DetectionResult ∩ GroundTruth) / area(DetectionResult ∪ GroundTruth)

where DetectionResult represents the result obtained through the neural network and GroundTruth represents the labeled result.
S203, compare the IOU value of each preprocessed image with the set IOU threshold, and output the preprocessed images whose IOU value is greater than the threshold as hard-sample-segmented images.
Preprocessed images whose IOU value is smaller than the threshold are input into the classifier again for classification; their IOU values are recalculated and compared with the threshold, images whose IOU value is now greater than the threshold are output, and this iteration of classifying, calculating the IOU value, and comparing against the threshold repeats until the IOU value of every preprocessed image is greater than the set threshold.
S204, the preprocessed images whose IOU value is greater than the threshold are output as segmented images, serving as the input of the convolutional neural network.
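The IOU comparison and requeue loop of S202-S204 can be sketched as follows. The (x1, y1, x2, y2) box representation and the helper names are illustrative, not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def split_by_iou(samples, threshold=0.5):
    """S203: output samples whose IoU with ground truth exceeds the threshold;
    the rest are requeued for another round of classification (S204 loop)."""
    kept, requeued = [], []
    for pred, gt in samples:
        (kept if iou(pred, gt) > threshold else requeued).append((pred, gt))
    return kept, requeued
```

The 0.5 threshold matches the value the patent sets; samples landing in `requeued` would be re-classified and re-scored until they pass.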
Preferably, the images are convolution trained by a convolutional neural network.
As shown in FIG. 5, the convolutional neural network includes Deformable ROI Align. The network comprises conv1, conv2, conv3, conv4, an RPN, Deformable ROI Align, conv5, conv6, conv7, a mask head, and a mask IoU head. conv1, conv2, conv3, and conv4 each comprise a convolution layer, a pooling layer, and an activation layer; conv5 comprises a fully connected layer and an activation layer; conv6 and conv7 are fully connected layers.
Preprocessing and hard sample segmentation can be carried out inside the convolutional neural network: the hard sample segmentation step is completed by conv1, conv2, and conv3, producing the segmented image output. The segmented image is input into conv4 for further feature extraction, generating the first feature image, which is fed into the RPN layer and the Deformable ROI Align layer. The RPN layer takes the first feature image and outputs a region image; the first feature image and the region image are input into the Deformable ROI Align layer, which outputs the second feature image. The second feature image is input into conv5, which outputs a convolution kernel; the convolution kernel outputs the classification probability via conv6 and the bounding box via conv7, and is also input into the mask head and then the mask IoU head, which outputs the mask image. The mask image is combined with the classification and bounding boxes to form the segmented image carrying the segmentation result.
In the RPN layer, the feature image passes through a convolution layer and an activation layer to obtain a number of anchor boxes, followed by two full convolutions. One of them performs pixel-wise binary classification of the 9 anchor boxes: the convolution layer crops and filters the 9 anchor boxes and then judges whether each belongs to the foreground or the background, that is, whether it contains an object; region proposals are then obtained through a reshape operation.
Meanwhile, the other convolution branch obtains the four coordinate offsets of the 9 anchor boxes pixel by pixel and corrects the anchor boxes to generate region proposals. Overlapping boxes can further be removed and the anchor boxes coarsely screened, keeping the top n, so that only n region proposals (preferably n = 500) enter the Deformable ROI Align layer, reducing the computational workload and increasing speed.
At the Deformable ROI Align layer, the region proposals generated by the RPN layer and the first feature image are input, and the layer maps the region proposals onto the first feature image to obtain the second feature image. During mapping, bilinear interpolation is used to obtain image values at points whose coordinates are floating-point numbers, turning the whole feature aggregation process into a continuous operation from which the learned offsets are computed.
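The bilinear interpolation used during this mapping can be sketched as below. This shows plain RoI Align sampling at a floating-point coordinate, without the learned deformable offsets; the row-major (y, x) indexing convention is an assumption.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Value of feature map `feat` at the floating-point location (x, y),
    as used inside RoI Align to avoid quantization error."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, feat.shape[1] - 1)
    y1 = min(y0 + 1, feat.shape[0] - 1)
    dx, dy = x - x0, y - y0
    # interpolate along x on the two bracketing rows, then along y
    top = feat[y0, x0] * (1 - dx) + feat[y0, x1] * dx
    bot = feat[y1, x0] * (1 - dx) + feat[y1, x1] * dx
    return top * (1 - dy) + bot * dy
```

Because the sampled value varies smoothly with (x, y), gradients can flow through the sampling locations, which is what lets a deformable variant learn its offsets.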
The second feature image is input into conv5, which outputs a convolution kernel.
The convolution kernel outputs the classification through conv6, computing which category each region proposal belongs to (such as person, horse, car, etc.).
The convolution kernel outputs a bounding box through conv7, obtaining the position offset of each region proposal for regression to a more accurate target detection box.
as shown in fig. 6, the convolution kernel is input to the mask head 40, and the mask head 40 includes 5 convolution layers (C401, C402, C403, C404, and C405), and mask generation is realized by the mask head.
As shown in fig. 5 and 6, the second feature image generated by the Deformable ROI Align and the mask generated by the mask head are input to the mask iou head, the mask iou head includes 4 convolution layers (C501, C502, C503, C504) and three full-connected layers (FC505, FC506, FC507), the mask image is output by the mask iou head, and the mask image is combined with the classification and bounding box to form a segmentation image having a segmentation result finally.
Referring to FIG. 7, a hard sample instance segmentation apparatus 60 includes:
an image preprocessing module 601, used to make the foreground and the background of the image easy to distinguish;
an image hard sample segmentation module 602, used to distinguish positive samples from negative samples in the image;
and an image convolution training module 603, used for instance segmentation of hard samples in the image.
As shown in FIG. 8, a computer-readable storage medium 70 for hard sample instance segmentation has stored thereon a computer program 701 which, when executed by a processor, implements the steps of the hard sample instance segmentation method.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages that are not necessarily performed at the same time or in sequence, but may instead be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structural changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A hard sample instance segmentation method, comprising:
preprocessing an image, so that the foreground and the background of the image are easy to distinguish;
performing hard sample segmentation on the image, to distinguish positive samples from negative samples in the image;
and performing convolution training on the image, for instance segmentation of hard samples in the image.
2. The hard sample instance segmentation method according to claim 1, wherein the image data is preprocessed using sharpening and clustering methods.
3. The hard sample instance segmentation method according to claim 2, wherein the sharpening is Laplacian sharpening and the clustering is K-means clustering.
4. The hard sample instance segmentation method according to claim 1, wherein the hard sample segmentation of the image comprises:
classifying the images, namely classifying the preprocessed images with a classifier;
calculating the image IOU value, namely calculating the IOU value of each classified image and setting an IOU threshold;
and comparing each IOU value with the set IOU threshold, and outputting the images whose IOU value is greater than the threshold.
5. The hard sample instance segmentation method according to claim 4, wherein the IOU threshold is 0.5.
6. The hard sample instance segmentation method according to claim 4, wherein: the IOU value is compared with the set IOU threshold; images whose IOU value is smaller than the threshold are input into the classifier again for classification, their IOU values are recalculated and compared with the threshold, images whose IOU value is greater than the threshold are output, and the classification, IOU calculation, and threshold comparison are repeated for the images whose IOU value remains smaller than the threshold.
7. The hard sample instance segmentation method according to claim 1, wherein the image is convolution-trained through a convolutional neural network.
8. The hard sample instance segmentation method according to claim 7, wherein the convolutional neural network comprises Deformable ROI Align.
9. A hard sample instance segmentation apparatus, comprising:
the image preprocessing module is used for enabling the foreground and the background of the image to be easily distinguished;
the image hard sample segmentation module is used for distinguishing a positive sample and a negative sample in the image;
and the convolution training module of the image is used for example segmentation of the difficult samples in the image.
10. A computer-readable storage medium for hard sample instance segmentation, characterized in that: the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the hard sample instance segmentation method as claimed in any one of claims 1 to 8.
CN202010480111.2A 2020-05-30 2020-05-30 Method, equipment and computer readable storage medium for dividing difficult sample instance Active CN111627033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010480111.2A CN111627033B (en) 2020-05-30 2020-05-30 Method, equipment and computer readable storage medium for dividing difficult sample instance


Publications (2)

Publication Number Publication Date
CN111627033A true CN111627033A (en) 2020-09-04
CN111627033B CN111627033B (en) 2023-10-20

Family

ID=72271378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010480111.2A Active CN111627033B (en) 2020-05-30 2020-05-30 Hard sample instance segmentation method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111627033B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516302A (en) * 2017-08-31 2017-12-26 北京无线电计量测试研究所 A kind of method of the mixed image enhancing based on OpenCV
CN110288082A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Convolutional neural networks model training method, device and computer readable storage medium
WO2019200753A1 (en) * 2018-04-17 2019-10-24 平安科技(深圳)有限公司 Lesion detection method, device, computer apparatus and storage medium
CN111046880A (en) * 2019-11-28 2020-04-21 中国船舶重工集团公司第七一七研究所 Infrared target image segmentation method and system, electronic device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gao Yun; Guo Jiliang; Li Xuan; Lei Minggang; Lu Jun; Tong Yu: "Instance segmentation method for group-reared pig images based on deep learning", Transactions of the Chinese Society for Agricultural Machinery (农业机械学报), no. 04 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784835A (en) * 2021-01-21 2021-05-11 恒安嘉新(北京)科技股份公司 Method and device for identifying authenticity of circular seal, electronic equipment and storage medium
CN112784835B (en) * 2021-01-21 2024-04-12 恒安嘉新(北京)科技股份公司 Method and device for identifying authenticity of circular seal, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111627033B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN111860670B (en) Domain adaptive model training method, image detection method, device, equipment and medium
CN108388896B (en) License plate identification method based on dynamic time sequence convolution neural network
CN111191583B (en) Space target recognition system and method based on convolutional neural network
CN109977997B (en) Image target detection and segmentation method based on convolutional neural network rapid robustness
CN108038435B (en) Feature extraction and target tracking method based on convolutional neural network
CN108537239B (en) Method for detecting image saliency target
CN107516316B (en) Method for segmenting static human body image by introducing focusing mechanism into FCN
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN111914698B (en) Human body segmentation method, segmentation system, electronic equipment and storage medium in image
CN112308860A (en) Earth observation image semantic segmentation method based on self-supervision learning
CN113065546B (en) Target pose estimation method and system based on attention mechanism and Hough voting
CN110378911B (en) Weak supervision image semantic segmentation method based on candidate region and neighborhood classifier
CN111179193B (en) Dermatoscope image enhancement and classification method based on DCNNs and GANs
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
CN113592894B (en) Image segmentation method based on boundary box and co-occurrence feature prediction
CN113011288A (en) Mask RCNN algorithm-based remote sensing building detection method
CN110827304A (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolutional network and level set method
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111652273A (en) Deep learning-based RGB-D image classification method
CN116433704A (en) Cell nucleus segmentation method based on central point and related equipment
CN113420648B (en) Target detection method and system with rotation adaptability
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN114565605A (en) Pathological image segmentation method and device
CN114219936A (en) Object detection method, electronic device, storage medium, and computer program product
CN111627033B (en) Hard sample instance segmentation method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant