CN112508128A - Training sample construction method, counting method, device, electronic equipment and medium - Google Patents

Training sample construction method, counting method, device, electronic equipment and medium

Info

Publication number
CN112508128A
CN112508128A
Authority
CN
China
Prior art keywords
image
segmentation
segmentation result
instances
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011532393.2A
Other languages
Chinese (zh)
Other versions
CN112508128B (en)
Inventor
陈路燕
聂磊
邹建法
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011532393.2A priority Critical patent/CN112508128B/en
Publication of CN112508128A publication Critical patent/CN112508128A/en
Application granted granted Critical
Publication of CN112508128B publication Critical patent/CN112508128B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a training sample construction method, a counting method, a device, an electronic device and a medium, relating to the technical field of image processing. The scheme is as follows: acquire an original image to be labeled; perform instance segmentation on the original image with an image recognition technique to obtain a first instance segmentation result corresponding to the original image; transmit the first instance segmentation result to a sticky segmentation platform and acquire a second instance segmentation result fed back by the platform, where the second instance segmentation result includes segmentation results for the sticky instances in the first instance segmentation result; label the original image according to the second instance segmentation result, and use the labeled image as a training sample for an instance segmentation model. In this way the sticky instances in the original image are segmented accurately, counting accuracy improves when parts are counted with a model trained on such samples, and labor consumption is reduced.

Description

Training sample construction method, counting method, device, electronic equipment and medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method for constructing a training sample, a method for counting, an apparatus, an electronic device, and a medium.
Background
With the rapid development of electronic technology, electronic components tend to be miniaturized, which makes counting and storing them difficult. The electronic processing industry, however, manages component quantities very strictly, so efficient storage and counting of parts is required for production management.
At present, parts are mounted on a material tape at equal intervals by Surface Mount Technology (SMT), and the tape is then wound up to form a material tray, which realizes part storage and management. When the parts are counted, the count is estimated from the length of the tape and the mounting interval.
However, the leading and trailing sections of the tape usually carry no parts, so the start and end positions of the mounted section must be determined manually, which consumes considerable labor. In addition, the mounting intervals cannot be guaranteed to be exactly equal, so the estimate contains errors.
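The tape-length estimate criticized above can be sketched as a small calculation. The function name and parameters below are illustrative, not taken from the patent:

```python
def estimate_part_count(tape_length_mm, lead_in_mm, lead_out_mm, pitch_mm):
    """Estimate the part count from tape length and mounting pitch.

    This is the estimate the background describes: the empty leading and
    trailing sections must be measured manually, and any pitch variation
    turns directly into counting error.
    """
    usable_mm = tape_length_mm - lead_in_mm - lead_out_mm
    if usable_mm <= 0:
        return 0
    # one part at the start of the usable section, then one per pitch
    return int(usable_mm // pitch_mm) + 1
```

For example, a 1 m tape with 50 mm empty at each end and a 4 mm pitch gives 226 parts; a mis-measured lead-in shifts the result part for part, which is exactly the manual-measurement weakness the patent targets.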
Disclosure of Invention
The application provides a construction method, a counting method, a device, electronic equipment and a medium of a training sample.
According to an aspect of the present application, there is provided a method for constructing a training sample, including:
acquiring an original image to be marked, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and the spacing distance between the sticky instances is smaller than or equal to a standard segmentation distance;
carrying out example segmentation on the original image by adopting an image recognition technology to obtain a first example segmentation result corresponding to the original image;
transmitting the first example segmentation result to a sticky segmentation platform, and acquiring a second example segmentation result fed back by the sticky segmentation platform; wherein the second instance segmentation result comprises segmentation results of at least two sticky instances in the first instance segmentation result;
and labeling the original image according to the second example segmentation result, and taking the labeled image as a training sample of an example segmentation model.
According to another aspect of the present application, there is provided a counting method including:
acquiring a target image, wherein the target image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and the spacing distance between the sticky instances is smaller than or equal to a standard segmentation distance;
inputting the target image into a pre-trained example segmentation model, and acquiring an example segmentation result corresponding to the target image;
the example segmentation model is obtained by training a training sample generated by using the training sample construction method according to any embodiment of the application;
and determining the quantity value of the image instances included in the target image according to the instance segmentation result.
According to another aspect of the present application, there is provided a training sample constructing apparatus including:
an original image acquisition module, configured to acquire an original image to be labeled, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and the spacing distance between the sticky instances is smaller than or equal to a standard segmentation distance;
a first example segmentation result acquisition module, configured to perform example segmentation on the original image by using an image recognition technology, and acquire a first example segmentation result corresponding to the original image;
the second example segmentation result acquisition module is used for transmitting the first example segmentation result to the adhesion segmentation platform and acquiring a second example segmentation result fed back by the adhesion segmentation platform; wherein the second instance segmentation result comprises segmentation results of at least two sticky instances in the first instance segmentation result;
and the training sample acquisition module is used for labeling the original image according to the second example segmentation result and taking the labeled image as a training sample of the example segmentation model.
According to another aspect of the present application, there is provided a counting apparatus comprising:
the target image acquisition module is used for acquiring a target image, wherein the target image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and the spacing distance between the sticky instances is smaller than or equal to the standard segmentation distance;
the example segmentation result acquisition module is used for inputting the target image into a pre-trained example segmentation model and acquiring an example segmentation result corresponding to the target image;
the example segmentation model is obtained by training a training sample generated by using the training sample construction method according to any embodiment of the application;
and the quantity value determining module is used for determining the quantity value of the image example included in the target image according to the example segmentation result.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method according to any of the embodiments of the present application.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of the embodiments of the present application.
According to another aspect of the application, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method of any of the embodiments of the application.
According to the technology of the application, the problems of training sample generation and electronic part counting are solved, and the quality of the constructed training samples is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic flow chart diagram of a training sample construction method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a method for constructing a training sample according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a segmentation result obtained by using an example segmentation model according to an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram of a counting method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for constructing training samples according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a counting device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device of a construction method or a counting method of training samples according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic flowchart of a method for constructing a training sample according to an embodiment of the present application. The embodiment is suitable for constructing training samples for an instance segmentation model when electronic part counting is implemented with deep learning techniques. The method may be performed by a training sample construction apparatus, which may be implemented in software and/or hardware and integrated in an electronic device, such as an SMT tray counting machine. Specifically, referring to fig. 1, the method includes the following steps:
and step 110, obtaining an original image to be marked.
The original image comprises a plurality of image examples, the image examples comprise a plurality of sticky examples, and the spacing distance between the sticky examples is smaller than or equal to the standard segmentation distance.
In this embodiment of the application, the original image may be an X-ray image of an SMT material tray. The electronic parts in an SMT tray sit very close together, and counting them from an ordinary color image works poorly. The application therefore uses X-ray images, which avoid the influence of color and make the counting of electronic parts more accurate. The raw image may be obtained as a data source transmitted by a downstream manufacturer, or by X-ray imaging of the SMT tray.
An image instance may be an electronic part; specifically, image instances may be obtained by segmenting the original image with a conventional image algorithm to isolate individual electronic parts. For example, the original image may be segmented with a conventional image algorithm using a standard segmentation distance. The standard segmentation distance may be determined from the preset interval at which the electronic parts are mounted, or from the positions of the electronic parts identified by the image algorithm.
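For instance, the distance could be derived from detected part positions along the tape as follows. This is a hypothetical helper (the name `standard_segmentation_distance` is not from the patent); it uses the median gap so that the near-zero gaps of sticky regions do not skew the estimate:

```python
import statistics

def standard_segmentation_distance(centers):
    """Estimate the standard segmentation distance from part center
    coordinates along the tape, taking the median of adjacent gaps so
    outlier gaps (sticky parts, missing parts) have little influence."""
    xs = sorted(centers)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return statistics.median(gaps)
```

With centers at 0, 10, 20, 21 and 31 the gaps are 10, 10, 1 and 10; the median, 10, ignores the sticky pair at 20/21.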
A sticky instance may be an image instance in which electronic parts adhere to one another on the tape. A spacing distance between sticky instances that is smaller than or equal to the standard segmentation distance indicates a sticky region in the image. A sticky region may be formed by several instances overlapping at the same position of the tape, or by several instances partially overlapping, so that the individual instances cannot be separated when the original image is segmented by a conventional image algorithm.
And 120, performing example segmentation on the original image by adopting an image recognition technology to obtain a first example segmentation result corresponding to the original image.
The first instance segmentation result may be the set of image instances obtained from the original image. A conventional image algorithm could be used to segment the original image into image instances; however, to reduce labor consumption, improve the accuracy of instance acquisition, and ease the subsequent handling of sticky instances, this application performs instance segmentation with an image recognition technique. The image recognition technique may apply optimization processing to the original image before segmenting it with an image algorithm, for example binarization, noise reduction, or background expansion.
Step 130, the first instance segmentation result is transmitted to the sticky segmentation platform, and a second instance segmentation result fed back by the sticky segmentation platform is obtained.
The second example segmentation result comprises segmentation results of at least two sticky examples in the first example segmentation result.
The sticky segmentation platform may be a platform for segmenting sticky regions in image instances. In this application, the platform obtains a manual re-labeling of the first instance segmentation result, namely the second instance segmentation result. The manual re-labeling segments the sticky instances: for example, when several sticky instances appear in the first instance segmentation result, each of them can be separated by manual re-labeling. This improves the accuracy of image segmentation while keeping the required labor low, and avoids exposing people to radiation.
And 140, labeling the original image according to the second example segmentation result, and taking the labeled image as a training sample of the example segmentation model.
The annotation of the original image may include labels for both image instances and sticky instances, which may be marked with the same or different labels. The labeled images serve as training samples for the instance segmentation model. With the scheme of steps 110 to 140, a large number of labeled images can be generated and used as training samples, which improves the accuracy and reliability of training the instance segmentation model and addresses the problems of the prior art: large counting errors for electronic parts, high labor consumption, and the inability to handle sticky regions.
According to the technical scheme of this embodiment, an original image to be labeled is acquired; instance segmentation is performed on the original image with an image recognition technique to obtain a first instance segmentation result corresponding to the original image; the first instance segmentation result is transmitted to the sticky segmentation platform, and a second instance segmentation result fed back by the platform is acquired; the original image is labeled according to the second instance segmentation result, and the labeled image is used as a training sample for the instance segmentation model. This solves the problem of constructing training samples for the instance segmentation model used in counting electronic parts, improves the reliability of training sample generation, facilitates more accurate part counting, saves manpower, and reduces the harm of X-rays to the human body.
Fig. 2 is a schematic flow chart of another training sample construction method according to an embodiment of the present application, which is a further refinement of the above technical solution, and the technical solution in this embodiment may be combined with one or more of the above embodiments.
Specifically, in an optional embodiment of the present application, performing instance segmentation on the original image with an image recognition technique and obtaining a first instance segmentation result corresponding to the original image includes: binarizing the original image, and performing connected-domain segmentation on the binarized image to obtain the first instance segmentation result.
In order to further improve the accuracy of image instance determination, in an optional embodiment of the present application, before performing connected domain segmentation on the binarized image, the method further includes: performing image background expansion processing on the image subjected to the binarization processing to increase the spacing distance between adjacent image instances; after the connected domain segmentation is performed on the image after the binarization processing to obtain a first example segmentation result, the method further includes: and performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image.
In an optional embodiment of the present application, performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image includes: performing edge optimization on the first instance segmentation result with a fully-connected conditional random field (DenseCRF) algorithm according to the pixel information of each pixel point in the original image.
Referring to fig. 2, the method specifically includes the following steps:
and step 210, obtaining an original image to be marked.
The original image comprises a plurality of image examples, the image examples comprise a plurality of sticky examples, and the spacing distance between the sticky examples is smaller than or equal to the standard segmentation distance; the original image is an X-ray image of the SMT material tray, and the image example is an electronic part.
And step 220, carrying out binarization processing on the original image.
Optionally, the original image is an X-ray image. Although an X-ray image looks black-and-white to the naked eye, it actually contains many gray values, not just two. To improve the accuracy of instance segmentation, the X-ray image must be binarized: pixels whose gray value is greater than or equal to a threshold are judged to be electronic parts and set to 255, and all other pixels are set to 0. The original image then has a clear black-and-white appearance, which facilitates connected-domain segmentation of the binarized image. A connected domain is a region with gray value 255, indicating where one or more electronic parts are located. By binarizing the original image and segmenting its connected domains, a first instance segmentation result is obtained, i.e., the image instances in the original image are determined.
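The binarization step can be sketched in a few lines of NumPy; the default threshold here is illustrative, since the patent does not specify a value:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Binarize a grayscale X-ray image as described in the text: pixels
    with gray value >= threshold are judged to be part pixels and set to
    255, all others are set to 0."""
    out = np.zeros_like(gray, dtype=np.uint8)
    out[gray >= threshold] = 255
    return out
```

In practice the threshold would be tuned to the X-ray exposure, or chosen automatically (e.g. Otsu's method in OpenCV).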
And step 230, performing image background expansion processing on the binarized image to increase the spacing distance between adjacent image instances.
Image background expansion processing grows the background region of the binarized image outward from the instance edges by one or more pixels. Expanding the background area enlarges the spacing distance between image instances, which makes the instances easier to separate and thus helps determine the number of electronic parts in the original image accurately.
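A minimal NumPy sketch of this step follows; growing the background is equivalent to eroding the 255-valued foreground by one pixel per iteration. In practice a library routine such as OpenCV's erode/dilate would be used, and the cross-shaped 4-neighbourhood here is an assumption, not a detail from the patent:

```python
import numpy as np

def expand_background(binary, iterations=1):
    """Grow the background inward by one pixel per iteration, i.e. erode
    the 255-valued foreground, so adjacent instances separate before
    connected-domain segmentation. Pixels outside the image are treated
    as background."""
    fg = binary == 255
    for _ in range(iterations):
        shrunk = fg.copy()
        # a foreground pixel survives only if all 4-neighbours are foreground
        shrunk[1:, :] &= fg[:-1, :]
        shrunk[:-1, :] &= fg[1:, :]
        shrunk[:, 1:] &= fg[:, :-1]
        shrunk[:, :-1] &= fg[:, 1:]
        shrunk[0, :] = shrunk[-1, :] = False   # border pixels touch the
        shrunk[:, 0] = shrunk[:, -1] = False   # (background) image edge
        fg = shrunk
    return np.where(fg, 255, 0).astype(np.uint8)
```

One iteration shaves a one-pixel rim off every instance, so two instances that touched along a thin seam fall apart into separate connected domains.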
And 240, performing connected domain segmentation on the image subjected to the image background expansion processing to obtain a first example segmentation result.
After the image background expansion, connected domains can be marked and segmented with a connected-domain algorithm, and the result can be turned into an image mask (Mask): specifically, the connected domains are set to 1 and the remaining area to 0.
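The connected-domain marking can be sketched with a plain breadth-first search (in practice a routine such as cv2.connectedComponents or scipy.ndimage.label would be used). Labels follow the mask convention in the text: nonzero inside connected domains, 0 elsewhere:

```python
from collections import deque

def label_connected_domains(binary):
    """Label 4-connected foreground regions of a binarized image.
    Returns (label_image, number_of_domains); each domain holds one or
    more electronic parts."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] == 255 and labels[i][j] == 0:
                count += 1
                labels[i][j] = count
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 255
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```

The domain count gives only a lower bound on the part count: a sticky region of several parts is still one domain, which is why the sticky segmentation platform and the instance segmentation model are needed.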
And step 250, performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image by adopting a DenseCRF algorithm.
The result obtained by marking and segmenting connected domains after background expansion is relatively rough: the edge shape of the original electronic part is easily lost, and training directly on such samples gives poor results.
According to the technical scheme of the embodiment of the application, after the connected domain is divided, edge optimization can be performed according to the pixel information of each pixel point in the original image. So that the edge shape of the electronic part in the resulting connected domain can be restored.
Illustratively, for an image to be segmented, each pixel point i has a category label x_i and an observed value y_i. Taking each pixel point as a node and the relationship between pixel points as edges constitutes a conditional random field. The conditional random field satisfies a Gibbs distribution; specifically, in a fully-connected conditional random field,

P(X = x | I) = (1 / Z(I)) · exp(−E(x | I))

where I is the global observation, Z(I) is the normalizing constant, and E(x) is the energy of the category labeling x:

E(x) = Σ_i Ψ_u(x_i) + Σ_{i<j} Ψ_p(x_i, x_j)

E(x) contains two terms. The unary potential Σ_i Ψ_u(x_i) represents the energy of assigning pixel point i the category label x_i; in segmentation tasks it is usually taken from the probability map output by a preceding segmentation model, and in this application it comes from the 0/1 image produced by the mask processing. The binary (pairwise) potential Σ_{i<j} Ψ_p(x_i, x_j) represents the energy of simultaneously assigning pixel points i and j the category labels x_i and x_j; it describes the relationships among all pixel points, encouraging similar pixel points to take the same category label and dissimilar pixel points to take different labels. The similarity of pixel points is measured from their positions and colors: pixel points with similar colors at nearby positions are more likely to be assigned the same label. With the DenseCRF algorithm, the connected domains in the image are segmented as closely as possible along the boundaries of the electronic parts, the shape of each part is preserved, and the segmented image serves as a more accurate training sample.
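The mean-field idea behind this can be illustrated on a toy 1-D signal. The sketch below is purely pedagogical, with made-up unary logits and an appearance-only kernel; it is not the patent's algorithm nor a real DenseCRF implementation, which would run on the 2-D image with position-and-color kernels via a dedicated library:

```python
import numpy as np

def meanfield_refine(mask, intensity, iters=5, w=2.0, sigma=0.1):
    """Refine a coarse 0/1 mask by mean-field updates: each pixel
    repeatedly borrows evidence from pixels of similar intensity, so the
    labelling snaps to intensity edges, the role the pairwise potential
    plays in DenseCRF."""
    intensity = np.asarray(intensity, dtype=float)
    u = np.where(np.asarray(mask) == 1, 2.0, -2.0)   # fixed unary logits
    diff = intensity[:, None] - intensity[None, :]
    k = np.exp(-diff ** 2 / (2 * sigma ** 2))        # appearance kernel
    np.fill_diagonal(k, 0.0)
    q = 1.0 / (1.0 + np.exp(-u))                     # current P(label = 1)
    for _ in range(iters):
        votes = k @ (2.0 * q - 1.0)                  # neighbour votes in [-1, 1]
        q = 1.0 / (1.0 + np.exp(-(u + w * votes)))
    return (q > 0.5).astype(int)
```

In the test below, the coarse mask spills one pixel past the intensity edge; the similar-intensity background pixels vote that pixel back to background, mirroring how DenseCRF pulls a rough mask onto the true part boundary.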
And step 260, transmitting the edge-optimized example segmentation result to the bonding segmentation platform, and acquiring a second example segmentation result fed back by the bonding segmentation platform.
The second example segmentation result comprises segmentation results of at least two sticky examples in the example segmentation results after the edge optimization.
And 270, labeling the original image according to the second example segmentation result, and taking the labeled image as a training sample of the example segmentation model.
And step 280, performing set machine learning model training according to the plurality of training samples to obtain an example segmentation model.
When model training is performed on the training samples, several segmentation models are available. In general, segmentation models divide into semantic segmentation models and instance segmentation models. A semantic segmentation model, when recognizing several similar individuals lying close together in an image, may label the whole group only once; an instance segmentation model labels each similar individual separately, even when two of them are very close.
In this application, so that the trained example segmentation model can later count the electronic parts accurately, an example segmentation model is selected as the set machine learning model, and model training is performed on the constructed training samples to obtain the trained example segmentation model.
In an alternative embodiment of the present application, the set machine learning model comprises a MaskRCNN model, which segments individuals more finely: even individuals of the same type that are stuck together can be separated into different instances.
FIG. 3 is a diagram illustrating a segmentation result obtained by using an example segmentation model according to an embodiment of the present application. As shown in fig. 3, with the example segmentation model trained in this application, there is no need to tune a large number of parameters for different electronic components, and an accurate segmentation result for each component is obtained without additionally segmenting the adhesion regions.
According to the technical solution of this embodiment of the application, an original image to be labeled is acquired; binarization processing is performed on the original image; image background expansion processing is performed on the binarized image to increase the spacing distance between adjacent image instances; connected domain segmentation is performed on the background-expanded image to obtain a first example segmentation result; edge optimization is performed on the first example segmentation result with the DenseCRF algorithm according to the pixel information of each pixel point in the original image; the edge-optimized example segmentation result is transmitted to the sticky segmentation platform, and a second example segmentation result fed back by the platform is acquired; the original image is labeled according to the second example segmentation result, and the labeled image is used as a training sample of the example segmentation model; and the set machine learning model is trained on the training samples to obtain the example segmentation model. This solves the problem of training the example segmentation model used for counting electronic parts, improves the reliability of training-sample generation and thereby the reliability and practicability of the model's training results, facilitates accurate counting of electronic parts, saves manpower, and reduces operators' exposure to X-rays.
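The preprocessing pipeline summarized above (binarization, background expansion, connected domain segmentation) can be sketched in a few lines. The threshold value, the use of foreground erosion to expand the background, and the choice of 4-connectivity are illustrative assumptions; a real implementation would more likely use OpenCV's `threshold`, `erode`, and `connectedComponents`:

```python
# Sketch of the training-sample preprocessing pipeline: binarize,
# expand the background (here by eroding the foreground) so nearby
# instances separate, then label connected components. The threshold
# and 4-connectivity are illustrative assumptions, not values from
# the application.
from collections import deque

def binarize(img, thresh):
    """Turn a grayscale image (list of rows) into a 0/1 foreground mask."""
    return [[1 if v >= thresh else 0 for v in row] for row in img]

def erode(mask):
    """One erosion step: a pixel stays foreground only if all four
    neighbours are foreground. This widens the background gap between
    adjacent instances (the 'image background expansion' step)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and all(
                0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                out[y][x] = 1
    return out

def label_components(mask):
    """4-connected component labelling via BFS. Each label is one
    candidate instance in the first example segmentation result.
    Returns (label map, number of components)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1
                labels[y][x] = count
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

On a mask where two blobs touch through a thin bridge, labelling before erosion finds one component, while labelling after erosion finds two — which is exactly why the background expansion step precedes connected domain segmentation.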
Fig. 4 is a schematic flowchart of a counting method according to an embodiment of the present application, where the counting method is suitable for counting electronic components, especially electronic components in an SMT tray, and the counting method can be implemented by a counting device, which can be implemented by software and/or hardware and integrated in an electronic device such as an SMT tray. Specifically, referring to fig. 4, the method specifically includes the following steps:
step 310, acquiring a target image.
The target image comprises a plurality of image examples, the image examples comprise a plurality of sticky examples, and the spacing distance between the sticky examples is smaller than or equal to the standard segmentation distance.
The target image may be a data source obtained from a downstream manufacturer through communication, or may be an image taken on an SMT tray for electronic part counting. In particular, the target image may be an X-ray image of the SMT tray.
And step 320, inputting the target image into a pre-trained example segmentation model, and acquiring an example segmentation result corresponding to the target image.
The example segmentation model is trained by using a training sample generated by the training sample construction method provided by any embodiment of the application.
The example segmentation of the target image in this application is obtained by deep learning through the example segmentation model provided herein, which is generated from the training samples constructed in advance. With this way of obtaining the example segmentation result of the target image, no parameter adjustment is needed for different electronic parts, which simplifies example segmentation; no manual participation is needed in each segmentation, which reduces labor; and no additional processing of the sticky regions is needed, which improves the accuracy of example segmentation.
Step 330, determining the quantity value of the image instance included in the target image according to the instance segmentation result.
In the example segmentation result obtained by segmenting the target image with the example segmentation model, the electronic parts are separated one by one into image examples. Because the sticky regions were taken into account during model training, the obtained example segmentation result contains no sticky regions: each image example contains exactly one electronic part, so the number of electronic parts can be determined by determining the quantity value of the image examples. This improves the accuracy of electronic part counting, and no parameters need to be adjusted for counting different electronic parts, which makes the process both convenient and accurate.
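The count in step 330 follows directly from the example segmentation result. A minimal sketch, assuming the model returns one confidence score per detected instance (the exact output format of the trained model is not specified in this application, so the score list and the 0.5 threshold are illustrative):

```python
def count_instances(instance_scores, score_thresh=0.5):
    """Each retained instance corresponds to one electronic part, so the
    part count is the number of instances whose confidence passes the
    threshold. The 0.5 default is an illustrative assumption."""
    return sum(1 for score in instance_scores if score >= score_thresh)
```

For a label-map representation instead (0 for background, k for instance k), the count is simply the number of distinct nonzero labels in the map.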
According to the technical solution of this embodiment of the application, a target image is acquired; the target image is input into a pre-trained example segmentation model, and an example segmentation result corresponding to the target image is acquired; and the quantity value of the image examples included in the target image is determined according to the example segmentation result. This solves the problem of counting electronic parts, improves counting accuracy, and simplifies the counting process: different electronic parts need no separate parameter adjustment and adhesion regions need no extra processing, which saves labor and reduces operators' exposure to X-rays.
Fig. 5 is a schematic structural diagram of an apparatus for constructing training samples according to an embodiment of the present disclosure, which may be disposed in an electronic device such as an SMT tray. Specifically, as shown in fig. 5, the apparatus includes: a raw image acquisition module 510, a first example segmentation result acquisition module 520, a second example segmentation result acquisition module 530, and a training sample acquisition module 540.
The original image obtaining module 510 is configured to obtain an original image to be labeled, where the original image includes multiple image instances, the image instances include multiple sticky instances, and a separation distance between the sticky instances is less than or equal to a standard segmentation distance;
a first example segmentation result obtaining module 520, configured to perform example segmentation on the original image by using an image recognition technology, and obtain a first example segmentation result corresponding to the original image;
a second example segmentation result obtaining module 530, configured to transmit the first example segmentation result to the sticky segmentation platform, and obtain a second example segmentation result fed back by the sticky segmentation platform; the second example segmentation result comprises segmentation results of at least two sticky examples in the first example segmentation result;
and the training sample obtaining module 540 is configured to label the original image according to the second example segmentation result, and use the labeled image as a training sample of the example segmentation model.
Optionally, the first example segmentation result obtaining module 520 includes:
and the first example segmentation result acquisition unit is used for carrying out binarization processing on the original image and carrying out connected domain segmentation on the image after the binarization processing to obtain a first example segmentation result.
Optionally, the apparatus further includes:
the expansion processing module is used for performing image background expansion processing on the image after the binarization processing before performing connected domain segmentation on the image after the binarization processing so as to increase the spacing distance between adjacent image instances;
and the edge optimization module is used for performing edge optimization on the first example segmentation result according to the pixel information of each pixel point in the original image after the connected domain segmentation is performed on the image subjected to the binarization processing to obtain the first example segmentation result.
Optionally, the edge optimization module includes:
and the edge optimization unit is used for performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image by adopting a fully-connected conditional random field DenseCRF algorithm.
Optionally, the original image is an X-ray image of the SMT tray with surface mount technology, and the image example is an electronic component.
Optionally, the apparatus further includes:
and the example segmentation model acquisition module is used for performing set machine learning model training according to a plurality of training samples after the marked image is used as a training sample of the example segmentation model to obtain the example segmentation model.
Optionally, the example segmentation model comprises a MaskRCNN model.
The device for constructing the training sample provided by the embodiment of the application can execute the method for constructing the training sample provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 6 is a schematic structural diagram of a counting device according to an embodiment of the present disclosure, which may be disposed in an electronic device such as an SMT tray. Specifically, as shown in fig. 6, the apparatus includes: a target image acquisition module 610, an example segmentation result acquisition module 620, and a numerical value determination module 630.
The target image obtaining module 610 is configured to obtain a target image, where the target image includes multiple image instances, the image instances include multiple sticky instances, and a separation distance between the sticky instances is smaller than or equal to a standard segmentation distance;
an example segmentation result obtaining module 620, configured to input the target image into a pre-trained example segmentation model, and obtain an example segmentation result corresponding to the target image;
the example segmentation model is obtained by training a training sample generated by using the training sample construction method provided by any embodiment of the application;
and a quantity value determining module 630, configured to determine a quantity value of the image instance included in the target image according to the instance segmentation result.
The counting device provided by the embodiment of the application can execute the counting method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the respective methods and processes described above, such as the construction method or the counting method of the training sample. For example, in some embodiments, the construction method or the counting method of the training samples may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the above described construction method or counting method of training samples may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g. by means of firmware) to perform the construction method or the counting method of the training samples.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in cloud computing service systems and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and the present invention is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (19)

1. A method for constructing a training sample comprises the following steps:
acquiring an original image to be marked, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and the spacing distance between the sticky instances is smaller than or equal to a standard segmentation distance;
carrying out example segmentation on the original image by adopting an image recognition technology to obtain a first example segmentation result corresponding to the original image;
transmitting the first example segmentation result to a sticky segmentation platform, and acquiring a second example segmentation result fed back by the sticky segmentation platform; wherein the second instance segmentation result comprises segmentation results of at least two sticky instances in the first instance segmentation result;
and labeling the original image according to the second example segmentation result, and taking the labeled image as a training sample of an example segmentation model.
2. The method of claim 1, wherein performing instance segmentation on the original image by using an image recognition technique to obtain a first instance segmentation result corresponding to the original image comprises:
and carrying out binarization processing on the original image, and carrying out connected domain segmentation on the binarized image to obtain the first example segmentation result.
3. The method of claim 2, further comprising, before performing connected component segmentation on the binarized image:
performing image background expansion processing on the image subjected to binarization processing to increase the spacing distance between adjacent image instances;
after the connected domain segmentation is performed on the binarized image to obtain the first example segmentation result, the method further includes:
and performing edge optimization on the first example segmentation result according to the pixel information of each pixel point in the original image.
4. The method of claim 3, wherein performing edge optimization on the first instance segmentation result according to pixel information of each pixel point in the original image comprises:
and performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image by adopting a fully-connected conditional random field DenseCRF algorithm.
5. The method of claim 1, wherein the raw image is a Surface Mount Technology (SMT) tray X-ray image and the image instances are electronic parts.
6. The method of any of claims 1-5, further comprising, after using the annotated image as a training sample of the instance segmentation model:
and performing set machine learning model training according to the plurality of training samples to obtain the example segmentation model.
7. The method of claim 6, wherein the set machine learning model comprises a MaskRCNN model.
8. A counting method, comprising:
acquiring a target image, wherein the target image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and the spacing distance between the sticky instances is smaller than or equal to a standard segmentation distance;
inputting the target image into a pre-trained example segmentation model, and acquiring an example segmentation result corresponding to the target image;
wherein the example segmentation model is trained by using a training sample generated by the training sample construction method according to any one of claims 1 to 7;
and determining the quantity value of the image instances included in the target image according to the instance segmentation result.
9. A training sample construction apparatus comprising:
an original image acquisition module, configured to acquire an original image to be labeled, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and a spacing distance between the sticky instances is smaller than or equal to a standard segmentation distance;
a first example segmentation result acquisition module, configured to perform example segmentation on the original image by using an image recognition technology, and acquire a first example segmentation result corresponding to the original image;
a second example segmentation result acquisition module, configured to transmit the first example segmentation result to the sticky segmentation platform and acquire a second example segmentation result fed back by the sticky segmentation platform; wherein the second instance segmentation result comprises segmentation results of at least two sticky instances in the first instance segmentation result;
and the training sample acquisition module is used for labeling the original image according to the second example segmentation result and taking the labeled image as a training sample of the example segmentation model.
10. The apparatus of claim 9, wherein the first instance segmentation result obtaining module comprises:
and the first example segmentation result acquisition unit is used for carrying out binarization processing on the original image and carrying out connected domain segmentation on the binarized image to obtain a first example segmentation result.
11. The apparatus of claim 10, further comprising:
the expansion processing module is used for performing image background expansion processing on the image after the binarization processing before performing connected domain segmentation on the image after the binarization processing so as to increase the spacing distance between the adjacent image instances;
and the edge optimization module is used for performing edge optimization on the first example segmentation result according to the pixel information of each pixel point in the original image after the connected domain segmentation is performed on the image subjected to the binarization processing to obtain the first example segmentation result.
12. The apparatus of claim 11, wherein the edge optimization module comprises:
and the edge optimization unit is used for performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image by adopting a fully-connected conditional random field DenseCRF algorithm.
13. The apparatus of claim 9, wherein the raw image is a Surface Mount Technology (SMT) tray X-ray image and the image instances are electronic parts.
14. The apparatus of any of claims 9-13, further comprising:
and the example segmentation model acquisition module is used for performing set machine learning model training according to a plurality of training samples after the marked images are used as the training samples of the example segmentation model to obtain the example segmentation model.
15. The apparatus of claim 14, wherein the instance segmentation model comprises a MaskRCNN model.
16. A counting device, comprising:
the target image acquisition module is used for acquiring a target image, the target image comprises a plurality of image instances, the image instances comprise a plurality of sticky instances, and the spacing distance between the sticky instances is smaller than or equal to the standard segmentation distance;
the example segmentation result acquisition module is used for inputting the target image into a pre-trained example segmentation model and acquiring an example segmentation result corresponding to the target image;
wherein the example segmentation model is trained by using a training sample generated by the training sample construction method according to any one of claims 1 to 7;
and the quantity value determining module is used for determining the quantity value of the image example included in the target image according to the example segmentation result.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7, or the method of claim 8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7, or the method of claim 8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-7, or the method of claim 8.
CN202011532393.2A 2020-12-22 2020-12-22 Training sample construction method, counting device, electronic equipment and medium Active CN112508128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011532393.2A CN112508128B (en) 2020-12-22 2020-12-22 Training sample construction method, counting device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011532393.2A CN112508128B (en) 2020-12-22 2020-12-22 Training sample construction method, counting device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN112508128A true CN112508128A (en) 2021-03-16
CN112508128B CN112508128B (en) 2023-07-25

Family

ID=74923338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011532393.2A Active CN112508128B (en) 2020-12-22 2020-12-22 Training sample construction method, counting device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112508128B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379784A (en) * 2021-05-30 2021-09-10 南方医科大学 Counting method of SMT material tray electronic components based on X-ray projection
CN113947771A (en) * 2021-10-15 2022-01-18 北京百度网讯科技有限公司 Image recognition method, apparatus, device, storage medium, and program product
CN114170483A (en) * 2022-02-11 2022-03-11 南京甄视智能科技有限公司 Training and using method, device, medium and equipment of floater identification model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095957A (en) * 2014-05-12 2015-11-25 Zhejiang Sci-Tech University Silkworm cocoon counting method based on image segmentation
US9939272B1 (en) * 2017-01-06 2018-04-10 TCL Research America Inc. Method and system for building personalized knowledge base of semantic image segmentation via a selective random field approach
US20180253622A1 (en) * 2017-03-06 2018-09-06 Honda Motor Co., Ltd. Systems for performing semantic segmentation and methods thereof
CN109242869A (en) * 2018-09-21 2019-01-18 iFlytek Co., Ltd. Image instance segmentation method, apparatus, device and storage medium
CN109801308A (en) * 2018-12-28 2019-05-24 Xidian University Segmentation method for images of adhered, near-circular targets
CN109919159A (en) * 2019-01-22 2019-06-21 Xidian University Semantic segmentation optimization method and device for edge images
CN111862119A (en) * 2020-07-21 2020-10-30 Wuhan University of Science and Technology Semantic information extraction method based on Mask-RCNN
CN111986183A (en) * 2020-08-25 2020-11-24 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Automatic segmentation and identification system and device for chromosome scattergram images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A. Kirillov et al.: "InstanceCut: From Edges to Instances with MultiCut", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
Gao Yun; Guo Jiliang; Li Xuan; Lei Minggang; Lu Jun; Tong Yu: "Instance segmentation method for group-housed pig images based on deep learning", Transactions of the Chinese Society for Agricultural Machinery, no. 04 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379784A (en) * 2021-05-30 2021-09-10 Southern Medical University Method for counting electronic components on SMT material trays based on X-ray projection
CN113379784B (en) * 2021-05-30 2022-03-25 Southern Medical University Method for counting electronic components on SMT material trays based on X-ray projection
CN113947771A (en) * 2021-10-15 2022-01-18 Beijing Baidu Netcom Science and Technology Co., Ltd. Image recognition method, apparatus, device, storage medium, and program product
CN114170483A (en) * 2022-02-11 2022-03-11 Nanjing Zhenshi Intelligent Technology Co., Ltd. Method, apparatus, medium, and device for training and using a floating-object recognition model
CN114170483B (en) * 2022-02-11 2022-05-20 Nanjing Zhenshi Intelligent Technology Co., Ltd. Method, apparatus, medium, and device for training and using a floating-object recognition model

Also Published As

Publication number Publication date
CN112508128B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN113378833B (en) Image recognition model training method, image recognition device and electronic equipment
CN112508128B (en) Training sample construction method, counting method, device, electronic equipment and medium
CN112861885B (en) Image recognition method, device, electronic equipment and storage medium
CN114419035B (en) Product identification method, model training device and electronic equipment
CN113657483A (en) Model training method, target detection method, device, equipment and storage medium
CN113537192A (en) Image detection method, image detection device, electronic equipment and storage medium
CN113255501A (en) Method, apparatus, medium, and program product for generating form recognition model
CN113610809A (en) Fracture detection method, fracture detection device, electronic device, and storage medium
CN112560936A (en) Model parallel training method, device, equipment, storage medium and program product
CN112508005A (en) Method, apparatus, device and storage medium for processing image
CN112580620A (en) Sign picture processing method, device, equipment and medium
CN116309963A (en) Batch labeling method and device for images, electronic equipment and storage medium
CN114882313B (en) Method, device, electronic equipment and storage medium for generating image annotation information
CN114677566B (en) Training method of deep learning model, object recognition method and device
CN115861255A (en) Model training method, device, equipment, medium and product for image processing
CN114120410A (en) Method, apparatus, device, medium and product for generating label information
CN113850072A (en) Text sentiment analysis method, sentiment analysis model training method, device, equipment and medium
CN115809687A (en) Training method and device for image processing network
CN113554068A (en) Semi-automatic labeling method and device for instance segmentation data set and readable medium
CN113032251A (en) Method, device and storage medium for determining service quality of application program
CN111311604A (en) Method and apparatus for segmenting an image
CN113361524B (en) Image processing method and device
CN113408633B (en) Method, apparatus, device and storage medium for outputting information
CN114494818B (en) Image processing method, model training method, related device and electronic equipment
CN115965075A (en) Character recognition model training method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant