CN114723646A - Image data generation method with label, device, storage medium and electronic equipment - Google Patents

Image data generation method with label, device, storage medium and electronic equipment

Info

Publication number
CN114723646A
CN114723646A (Application CN202210179838.6A)
Authority
CN
China
Prior art keywords
image
sub
background image
target
foreground object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210179838.6A
Other languages
Chinese (zh)
Inventor
陈奕名
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuda Dongfang Software Technology Co ltd
Original Assignee
Beijing Yuda Dongfang Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuda Dongfang Software Technology Co ltd filed Critical Beijing Yuda Dongfang Software Technology Co ltd
Priority to CN202210179838.6A
Publication of CN114723646A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a method and an apparatus for generating annotated image data, a storage medium, and an electronic device, so as to solve the problem of inefficient manual data annotation in the related art. The annotated image data generation method comprises the following steps: inputting a first random matrix into a trained background image generation model to obtain a background image and frame coordinates output by the background image generation model, wherein the frame coordinates are used for representing the position, in the background image, of a first sub-image of a foreground object to be generated; segmenting the background image according to the frame coordinates to obtain the first sub-image and a second sub-image; inputting a second random matrix and the first sub-image into a trained foreground object generation model to obtain a target sub-image output by the foreground object generation model, wherein the target sub-image comprises a target foreground object; and using at least the frame coordinates as annotation information of the target foreground object in the target sub-image, and fusing the annotated target sub-image with the second sub-image to obtain annotated image data.

Description

Image data generation method with label, device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image generation technologies, and in particular, to a method and an apparatus for generating image data with annotations, a storage medium, and an electronic device.
Background
In the field of artificial intelligence, a large amount of training data is often needed in order to train a model of good quality. In the related art, to ensure the accuracy of the training data, model training data is often obtained by manually annotating data. However, manual annotation of data is inefficient.
Disclosure of Invention
The present disclosure is directed to a method, an apparatus, a storage medium, and an electronic device for generating annotated image data, so as to solve the problem of inefficient manual data annotation in the related art.
In order to achieve the above object, according to a first aspect of the embodiments of the present disclosure, there is provided a method for generating image data with annotations, the method including:
inputting a first random matrix into a trained background image generation model to obtain a background image and frame coordinates output by the background image generation model, wherein the frame coordinates are used for representing the position of a first subimage of a foreground object to be generated in the background image;
segmenting the background image according to the frame coordinates to obtain a first sub-image and a second sub-image;
inputting a second random matrix and the first sub-image into a trained foreground object generation model to obtain a target sub-image output by the foreground object generation model, wherein the size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object;
and at least taking the frame coordinates as the labeling information of the target foreground object in the target subimage, and fusing the labeled target subimage with the second subimage to obtain image data with labels.
Optionally, the training process of the background map generation model includes:
inputting a random matrix sample into a background image generation model to be trained to obtain a synthetic background image output by the background image generation model to be trained;
inputting the synthesized background image into a foreground recognizer to obtain a first recognition result of whether a target foreground object exists in the synthesized background image or not, wherein the first recognition result is output by the foreground recognizer;
inputting the random matrix sample, the synthetic background image and a background image sample set into an identification model to be trained to obtain a second identification result of whether the identification model to be trained identifies the synthetic background image as an image in the background image sample set or not and a third identification result of whether the identification model to be trained identifies each background image sample in the background image sample set as an image generated by the background image generation model to be trained or not;
calculating loss information according to the first authentication result, the second authentication result and the third authentication result;
and adjusting the training parameters of the identification model to be trained according to the loss information to obtain the updated identification model to be trained, and returning to execute the steps of inputting the random matrix sample, the synthetic background image and the background image sample set into the identification model to be trained to obtain the second identification result and the third identification result until the training parameters of the identification model to be trained are updated for N times.
Optionally, the method further comprises:
and after the training parameters of the identification model to be trained are updated for the Nth time, updating the training parameters of the background image generation model according to the loss information obtained by the (N + 1) th calculation.
Optionally, the calculating loss information according to the first authentication result, the second authentication result, and the third authentication result includes:
calculating the loss information by the following formula:
[Formula shown as an image in the original publication (Figure BDA0003522008780000031); the symbols are defined below.]
wherein J represents the loss information, m represents the total number of samples, x^(i) represents the i-th background image sample, z^(i) represents the i-th random matrix sample, G(z^(i)) represents the i-th synthetic background image, D(x^(i)) represents the probability of identifying the i-th background image sample as an image generated by the background image generation model to be trained, T(G(z^(i))) represents the probability that the foreground recognizer recognizes the target foreground object in the i-th synthetic background image, and D(G(z^(i))) represents the probability of identifying the i-th synthetic background image as an image in the background image sample set.
Optionally, the foreground object generation model includes a network fusion module, and the inputting the second random matrix and the first sub-image into the trained foreground object generation model to obtain a target sub-image output by the foreground object generation model includes:
inputting the second random matrix and the first subimage into a network fusion module to obtain a fusion matrix obtained by fusing the second random matrix and an image matrix corresponding to the first subimage;
generating an initial target sub-image based on the fusion matrix, wherein the size of the initial target sub-image is the same as that of the first sub-image;
and fusing a first edge area sub-image in the initial target sub-image with a second edge area sub-image of the first sub-image to obtain an edge fused initial target sub-image, wherein the target sub-image represents the edge fused initial target sub-image.
Optionally, the fusing the first edge region sub-image in the initial target sub-image with the second edge region sub-image of the first sub-image includes:
and aiming at each first pixel point in the first edge area of the initial target image, calculating according to the pixel value of the first pixel point and the pixel value of a second pixel point at the same position as the first pixel point in the second edge area of the first sub-image to obtain the pixel value of a third pixel point at the same position as the first pixel point in the target sub-image.
Optionally, the method further comprises:
acquiring object type information of the target foreground object;
the at least using the frame coordinate as the labeling information of the target foreground object in the target sub-image comprises:
and taking the frame coordinates and the object type information as labeling information of the target foreground object in the target sub-image.
According to a second aspect of the embodiments of the present disclosure, there is provided an annotated image data generation apparatus, the apparatus comprising:
the first input module is used for inputting a first random matrix into a trained background image generation model to obtain a background image and frame coordinates output by the background image generation model, wherein the frame coordinates are used for representing the position of a first sub-image of a foreground object to be generated in the background image;
the segmentation module is used for segmenting the background image according to the frame coordinates to obtain the first sub-image and the second sub-image;
the second input module is used for inputting a second random matrix and the first sub-image into a trained foreground object generation model to obtain a target sub-image output by the foreground object generation model, wherein the size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object;
and the fusion module is used for at least taking the frame coordinates as the labeling information of the target foreground object in the target subimage and fusing the labeled target subimage and the second subimage to obtain image data with labels.
According to a third aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the above first aspects.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any of the first aspects above.
By adopting the technical scheme, the following technical effects can be at least achieved:
and inputting the first random matrix into the trained background image generation model to obtain a background image and frame coordinates output by the background image generation model. Since the frame coordinates can be used to represent the position of the first sub-image of the foreground object to be generated in the background image, the background image can be segmented according to the frame coordinates to obtain the first sub-image and the second sub-image. On the basis, the second random matrix and the first sub-image are input into the trained foreground object generation model, and the target sub-image output by the foreground object generation model is obtained. The size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object. And at least taking the frame coordinates as the labeling information of the target foreground object in the target sub-image, and fusing the labeled target sub-image and the second sub-image, thereby obtaining image data with labels. This way of generating annotated image data by a model of the present disclosure is more efficient than the way of manually annotating data in the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flowchart illustrating a method for generating annotated image data according to an exemplary embodiment of the present disclosure.
FIG. 2 is a block diagram illustrating an annotated image data generation apparatus according to an exemplary embodiment of the present disclosure.
FIG. 3 is a block diagram of an electronic device shown in accordance with an exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
In the related art, a Generative Adversarial Network (GAN) is an unsupervised network architecture which, compared with a conventional neural network model, includes two independent networks, namely an authentication network and a generation network. The generation network is used for generating data, and the authentication network is used for judging whether input data is real data or false data (i.e., data generated by the generation network). In the training process of a GAN, the training parameters of the authentication network and/or the generation network are usually adjusted according to the authentication result of the authentication network, so that the authentication network has a strong authentication capability (i.e., the capability of correctly distinguishing real data from false data) and the generation network has a strong data generation capability (i.e., the capability of generating data that the authentication network identifies as real data).
The following describes detailed embodiments of a method, an apparatus, a storage medium, and an electronic device for generating image data with annotations according to embodiments of the present disclosure.
Fig. 1 is a flowchart illustrating a method for generating annotated image data according to an exemplary embodiment of the disclosure, as shown in fig. 1, the method including:
s101, inputting the first random matrix into a trained background image generation model to obtain a background image and frame coordinates output by the background image generation model, wherein the frame coordinates are used for representing the position of a first sub-image of a foreground object to be generated in the background image.
It should be noted that the background image generation model may generate a corresponding background image according to an input random matrix. In the present disclosure, the first random matrix is input into the trained background image generation model to obtain a background image and frame coordinates output by the background image generation model. The frame coordinates are used for representing the position of a first sub-image of a foreground object to be generated in the background image. The frame coordinates may be randomly generated by a corresponding algorithm after the background image is generated, or may be generated by the background image generation model itself from the input random matrix, provided that the model has been trained to output both the background image and the frame coordinates during its training process.
And S102, segmenting the background image according to the frame coordinates to obtain the first sub-image and the second sub-image.
It will be appreciated that the frame coordinates may be used to characterize the position of the first sub-image of the foreground object to be generated in the background image. Therefore, the background image can be segmented according to the frame coordinates to obtain the first sub-image and the second sub-image.
For example, suppose the frame coordinates are (1, 2), (1, 4), (2, 2) and (2, 4); these four coordinates are the corners of a rectangular frame. Then, with one side of the background image taken as the x-axis and the side intersecting it taken as the y-axis, the position of the frame is determined in the background image, so that the background image can be segmented along the frame to obtain the first sub-image and the second sub-image.
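As a hedged illustration of this segmentation step (not taken from the disclosure), the frame can be treated as a pixel-coordinate rectangle and the split implemented as a crop plus a hollowed-out copy; the (x1, y1, x2, y2) convention and the array names below are assumptions.

```python
import numpy as np

def split_background(background: np.ndarray, frame: tuple):
    """Split a background image (H x W x C array) into the first sub-image
    (the frame region where the foreground object will be generated) and the
    second sub-image (the rest of the background, with the frame hollowed out)."""
    x1, y1, x2, y2 = frame   # assumed pixel-coordinate convention
    first_sub = background[y1:y2, x1:x2].copy()
    second_sub = background.copy()
    second_sub[y1:y2, x1:x2] = 0   # region to be refilled later by the fused target sub-image
    return first_sub, second_sub
```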
S103, inputting the second random matrix and the first sub-image into a trained foreground object generation model to obtain a target sub-image output by the foreground object generation model.
The size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object.
In the training process of the foreground object generation model, the foreground object generation model may be trained to generate a target image with the same size as the input image according to the input random matrix and the input image, and the target image includes the target foreground object. On the basis, the second random matrix and the first sub-image are input into a trained foreground object generation model, a target sub-image output by the foreground object generation model can be obtained, the size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object.
If the annotated image data is to be annotated with animals, the target foreground object may be any animal; if the annotated image data is to be annotated with vehicles, the target foreground object may be any vehicle. The present disclosure is not particularly limited in this respect.
S104, at least using the frame coordinate as the labeling information of the target foreground object in the target sub-image, and fusing the labeled target sub-image and the second sub-image to obtain image data with labels.
It can be understood that the frame coordinates may be used to characterize the position of the first sub-image of the foreground object to be generated in the background image, and the target sub-image is an image with the same size as the first sub-image obtained from the foreground object generation model trained by inputting the first sub-image and the second random matrix. Therefore, the frame coordinates can be used as the labeling information of the target foreground object in the target sub-image.
In the disclosure, at least the frame coordinates are used as labeling information of the target foreground object in the target sub-image, and the labeled target sub-image and the second sub-image are fused to obtain annotated image data. The fusing of the labeled target sub-image and the second sub-image may refer to merging the outer frame of the target sub-image with the inner frame of the second sub-image. It is understood that the inner frame of the second sub-image is formed when the background image is segmented according to the frame coordinates, and this inner frame matches the outer frame of the first sub-image.
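A minimal sketch of this fusion step, assuming the hollowed-out representation of the second sub-image used in the earlier sketch and an illustrative dictionary layout for the annotation information (neither is prescribed by the disclosure):

```python
def compose_annotated_image(second_sub, target_sub, frame, object_type=None):
    """Paste the target sub-image back into the frame region of the second
    sub-image and return the fused image with its annotation."""
    x1, y1, x2, y2 = frame
    image = second_sub.copy()
    image[y1:y2, x1:x2] = target_sub         # outer frame of target_sub matches the inner frame
    annotation = {"bbox": [x1, y1, x2, y2]}  # frame coordinates as labeling information
    if object_type is not None:
        annotation["category"] = object_type  # optional object type information
    return image, annotation
```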
By adopting the method, the first random matrix is input into the trained background image generation model, and the background image and the frame coordinates output by the background image generation model are obtained. Since the frame coordinates can be used to represent the position of the first sub-image of the foreground object to be generated in the background image, the background image can be segmented according to the frame coordinates to obtain the first sub-image and the second sub-image. On the basis, the second random matrix and the first sub-image are input into the trained foreground object generation model, and the target sub-image output by the foreground object generation model is obtained. The size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object. And at least taking the frame coordinates as the labeling information of the target foreground object in the target sub-image, and fusing the labeled target sub-image and the second sub-image, thereby obtaining image data with labels. This way of generating annotated image data by a model of the present disclosure is more efficient than the way of manually annotating data in the related art.
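Putting steps S101 to S104 together, an end-to-end call might look like the sketch below, reusing the helper functions sketched above; the generator interfaces and the random-matrix dimension are assumptions, not interfaces defined by the disclosure.

```python
import numpy as np

def generate_annotated_sample(bg_generator, fg_generator, z_dim=128):
    """Run S101-S104 once and return one fused image plus its annotation."""
    z1 = np.random.randn(z_dim)
    background, frame = bg_generator(z1)                            # S101: background image and frame coordinates
    first_sub, second_sub = split_background(background, frame)    # S102 (sketched earlier)
    z2 = np.random.randn(z_dim)
    target_sub = fg_generator(z2, first_sub)                        # S103: same size as first_sub, contains the foreground object
    return compose_annotated_image(second_sub, target_sub, frame)  # S104 (sketched earlier)
```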
In order to make the technical solutions provided by the present disclosure more easily understood by those of ordinary skill in the art, the following detailed description is given for each of the above steps.
Optionally, the training process of the background map generation model may include:
inputting a random matrix sample into a background image generation model to be trained to obtain a synthetic background image output by the background image generation model to be trained;
inputting the synthesized background image into a foreground recognizer to obtain a first recognition result of whether a target foreground object exists in the synthesized background image or not, wherein the first recognition result is output by the foreground recognizer;
inputting the random matrix sample, the synthesized background image and a background image sample set into an identification model to be trained to obtain a second identification result of whether the identification model to be trained identifies the synthesized background image as an image in the background image sample set and a third identification result of whether the identification model to be trained identifies each background image sample in the background image sample set as an image generated by the background image generation model to be trained;
calculating loss information according to the first discrimination result, the second discrimination result and the third discrimination result;
and adjusting the training parameters of the identification model to be trained according to the loss information to obtain the updated identification model to be trained, and returning to the step of inputting the random matrix sample, the synthetic background image and the background image sample set into the identification model to be trained to obtain the second identification result and the third identification result until the training parameters of the identification model to be trained are updated for N times.
Optionally, the method provided by the embodiment of the present disclosure may further include:
and after the training parameters of the identification model to be trained are updated for the nth time, updating the training parameters of the background image generation model according to the loss information obtained by the (N + 1) th calculation.
It should be noted that the network architecture of the background map generation model in the present disclosure during the training process may be similar to that of GAN, that is, during the training process of the background map generation model, the authentication model and the background map generation model may be involved.
In the training process of the background image generation model, the random matrix sample may be input into the background image generation model to be trained to obtain a synthetic background image output by the background image generation model to be trained. In order to prevent the generated synthetic background image from containing a target foreground object that would then be left unlabeled, a foreground recognizer may be added after the synthetic background image is generated, and the recognition result of the foreground recognizer is incorporated into the calculation of the model loss information for parameter adjustment. Specifically, the synthetic background image is input into the foreground recognizer to obtain a first recognition result output by the foreground recognizer, the first recognition result indicating whether a target foreground object exists in the synthetic background image. The foreground recognizer may be configured to recognize whether a target foreground object exists in the synthetic background image, and the algorithm used by the foreground recognizer may be, for example, the YOLO (You Only Look Once) algorithm. The foreground recognizer can recognize a target foreground object in an image and output the frame coordinates of the target foreground object and the category to which the target foreground object belongs, so whether a target foreground object exists in the synthetic background image can be determined according to the category output by the foreground recognizer. For example, if the annotated image data is to be annotated with felines, it is determined that a target foreground object exists in the synthetic background image when the category output by the foreground recognizer indicates that the detected object belongs to the felines.
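As an illustration of how the first recognition result might be derived from such a recognizer's output, the sketch below assumes a generic list of (class, confidence, bbox) detections; it is not the interface of any specific YOLO implementation.

```python
def foreground_present(detections, target_classes, score_threshold=0.5):
    """First recognition result: highest confidence of any detection whose
    class belongs to the target foreground classes (0.0 if none is found).

    detections: iterable of (class_name, confidence, bbox) tuples produced by
                the foreground recognizer (format assumed for illustration).
    """
    scores = [conf for cls, conf, _ in detections
              if cls in target_classes and conf >= score_threshold]
    return max(scores, default=0.0)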
In the GAN training process, the parameters of the generation network are usually fixed first, and the parameters of the authentication network are adjusted according to the judgment result of the authentication network. After the authentication network has obtained a certain authentication capability (i.e., the capability of correctly distinguishing real data from data generated by the generation network) through this training process, the parameters of the authentication network are fixed, and the parameters of the generation network are adjusted according to the judgment result of the authentication network, so that the generation network obtains a certain data generation capability (i.e., the capability of generating data that the authentication network identifies as real data). This iteration is repeated, thereby training a generation network with strong data generation capability. On this basis, with the training parameters of the background image generation model to be trained fixed, the random matrix sample, the synthetic background image and the background image sample set (the background image sample set may correspond to real data) are input into the identification model to be trained to obtain a second identification result of whether the identification model to be trained identifies the synthetic background image as an image in the background image sample set, and a third identification result of whether the identification model to be trained identifies each background image sample in the background image sample set as an image generated by the background image generation model to be trained; loss information is then calculated according to the first identification result, the second identification result and the third identification result, so that the training parameters of the identification model to be trained can be adjusted according to the loss information to obtain an updated identification model to be trained.
It should be noted that, in order to make the identification model have better identification capability, the steps of inputting the random matrix sample into the background image generation model to be trained, inputting the random matrix sample, the synthetic background image and the background image sample set into the identification model to be trained after obtaining the synthetic background image and the first identification result of the foreground recognizer for the synthetic background image to obtain the second identification result and the third identification result, and updating the training parameters of the identification model to be trained according to the first identification result, the second identification result and the third identification result may be performed until the training parameters of the identification model to be trained are updated N times. Wherein N may be a constant greater than or equal to 1.
It is understood that after the Nth update of the training parameters of the authentication model to be trained, the authentication model to be trained has been adjusted into an authentication model with a certain authentication capability. On this basis, the training parameters of the background image generation model to be trained can be updated according to the loss information obtained by the (N + 1) th calculation, with the training parameters of the identification model fixed. The process then returns to the steps of inputting the random matrix sample into the background image generation model to be trained, inputting the random matrix sample, the synthetic background image and the background image sample set into the identification model after obtaining the synthetic background image and the first identification result of the foreground recognizer for the synthetic background image, so as to obtain a second identification result and a third identification result, and updating the training parameters of the background image generation model to be trained according to the first identification result, the second identification result and the third identification result, until the training parameters of the background image generation model to be trained have been updated N times. Thus, a background image generation model having a certain data generation capability can be obtained. Then, after the processes of updating the training parameters of the identification model to be trained N times and updating the training parameters of the background image generation model to be trained N times are executed through multiple iterations, a background image generation model with strong data generation capability can be trained.
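For illustration only, the alternating schedule described above (N updates of the identification model with the generation model fixed, then N updates of the background image generation model with the identification model fixed, repeated over multiple iterations) might look like the following PyTorch-style sketch. All module names, loss-function signatures and hyperparameters are assumptions; the foreground recognizer T is assumed to be differentiable (or its output treated as a fixed penalty), and the disclosure's additional step of feeding the random matrix sample itself to the identification model is omitted for brevity. Plausible loss functions matching the d_loss_fn and g_loss_fn signatures are sketched after the loss-formula discussion below.

```python
import torch

def train_background_generator(G, D, T, d_loss_fn, g_loss_fn,
                               data_loader, optim_G, optim_D,
                               N=5, rounds=100, z_dim=128, device="cpu"):
    """Alternate N updates of the identification model D with N updates of the
    background image generation model G; T is the frozen foreground recognizer."""
    for _ in range(rounds):
        # Phase 1: update D for N steps with G fixed.
        for real in _take(data_loader, N):
            z = torch.randn(real.size(0), z_dim, device=device)
            fake = G(z).detach()                      # generator output, gradients blocked
            loss_d = d_loss_fn(D(real), D(fake), T(fake))
            optim_D.zero_grad(); loss_d.backward(); optim_D.step()
        # Phase 2: update G for N steps with D fixed (only optim_G steps).
        for real in _take(data_loader, N):
            z = torch.randn(real.size(0), z_dim, device=device)
            fake = G(z)
            loss_g = g_loss_fn(D(fake), T(fake))      # simplified loss: the real-sample
            optim_G.zero_grad(); loss_g.backward(); optim_G.step()  # term does not depend on G

def _take(loader, n):
    """Yield at most n batches from a data loader."""
    for i, batch in enumerate(loader):
        if i >= n:
            return
        yield batch
```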
It should be noted that, in the related art, generally only the synthetic data output by the generation network and the real data are input into the authentication network, and the parameter adjustment is performed according to the authentication result of the authentication network. In the present disclosure, the random matrix sample, the synthetic background image and the background image sample set are input into the identification model, and the parameters are adjusted according to the identification result of the identification model. This is because, when the generation network outputs image data, the feature information of the image data is usually relatively redundant, where the redundancy may be spatial redundancy (i.e., redundancy caused by the strong correlation between adjacent pixels in an image), and redundant feature information is not favorable for convergence of the model. In this case, the random matrix sample, the synthetic background image and the background image sample set can together serve as the input of the identification model, so that the redundancy of the input data can be reduced and the training effect of the model can be improved.
Optionally, calculating the loss information according to the first authentication result, the second authentication result, and the third authentication result includes:
the loss information is calculated by the following formula:
[Formula shown as an image in the original publication (Figure BDA0003522008780000121); the symbols are defined below.]
where J represents the loss information, m represents the total number of samples, x^(i) represents the i-th background image sample, z^(i) represents the i-th random matrix sample, G(z^(i)) represents the i-th synthetic background image, D(x^(i)) represents the probability of identifying the i-th background image sample as an image generated by the background image generation model to be trained, T(G(z^(i))) represents the probability that the foreground recognizer recognizes the target foreground object in the i-th synthetic background image, and D(G(z^(i))) represents the probability of identifying the i-th synthetic background image as an image in the background image sample set.
The total number of samples m may be less than or equal to the number of random matrix samples input into the background image generation model to be trained.
It should be noted that, since the loss information is usually very small, the loss information calculated when the training parameters of the identification model to be trained are adjusted (with the training parameters of the background image generation model to be trained fixed), and the loss information calculated when the training parameters of the background image generation model to be trained are adjusted (with the training parameters of the identification model fixed), may both adopt the J described above. In order to further improve the model accuracy of the background image generation model, the loss information calculated when the training parameters of the background image generation model to be trained are adjusted with the training parameters of the identification model fixed may instead be calculated by the following simplified formula:
[Simplified formula shown as an image in the original publication (Figure BDA0003522008780000122).]
wherein J1 represents the simplified loss information. It can be understood that D(x^(i)) represents the probability of identifying the i-th background image sample as an image generated by the background image generation model to be trained; since this identification result is independent of the model performance of the background image generation model, the corresponding term can be omitted in the simplification.
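The published formulas themselves are only available as images, so the following Python sketch should be read as one plausible instantiation of J and J1 derived from the symbol definitions above (a binary cross-entropy-style GAN objective plus the foreground-recognizer term), not as the patent's exact expression. Functions of this shape could be passed as d_loss_fn and g_loss_fn to the alternating training-loop sketch given earlier.

```python
import torch

def d_loss_fn(d_real, d_fake, t_fake, eps=1e-8):
    """Identification-model loss (to be minimized): real background samples should
    not be judged generated (d_real -> 0) and synthetic backgrounds should not be
    judged real (d_fake -> 0). The t_fake term carries no gradient with respect to
    the identification model and is included only so that all three results enter
    the loss, as described in the text."""
    return -(torch.log(1 - d_real + eps)
             + torch.log(1 - d_fake + eps)
             + torch.log(1 - t_fake + eps)).mean()

def g_loss_fn(d_fake, t_fake, eps=1e-8):
    """Simplified generation-model loss (to be minimized), with the real-sample
    term dropped: synthetic backgrounds should be judged real (d_fake -> 1) and
    should contain no target foreground object (t_fake -> 0)."""
    return -(torch.log(d_fake + eps) + torch.log(1 - t_fake + eps)).mean()
```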
Optionally, the foreground object generation model includes a network fusion module, and on this basis, the step S103 may include:
inputting a second random matrix and a first subimage into a network fusion module to obtain a fusion matrix obtained by fusing the second random matrix and an image matrix corresponding to the first subimage;
generating an initial target sub-image based on the fusion matrix, wherein the size of the initial target sub-image is the same as that of the first sub-image;
and fusing a first edge area sub-image in the initial target sub-image with a second edge area sub-image of the first sub-image to obtain an edge fused initial target sub-image, wherein the target sub-image represents the edge fused initial target sub-image.
For example, the second random matrix and the first sub-image may be input into the network fusion module to obtain a fusion matrix formed by fusing the second random matrix with the image matrix corresponding to the first sub-image. The second random matrix and the image matrix corresponding to the first sub-image may be fused by matrix addition or by matrix connection (concatenation). For example, if the second random matrix is [32, 112, 112] and the image matrix corresponding to the first sub-image is [32, 112, 112], the resulting fusion matrix is [32, 112, 112] when the second random matrix is added to the image matrix, and [64, 112, 112] when the second random matrix is connected to the image matrix.
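A short sketch of the two fusion options with the shapes used in this example (tensor names are illustrative):

```python
import torch

z = torch.randn(32, 112, 112)            # second random matrix, shape [32, 112, 112]
img = torch.randn(32, 112, 112)          # image matrix of the first sub-image

fused_add = z + img                      # matrix addition    -> shape [32, 112, 112]
fused_cat = torch.cat([z, img], dim=0)   # matrix connection  -> shape [64, 112, 112]
```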
On this basis, an initial target sub-image is generated according to the obtained fusion matrix. The size of the initial target sub-image is the same as the size of the first sub-image. In order to make the fusion of the target sub-image and the second sub-image more natural, the first edge region sub-image in the initial target sub-image and the second edge region sub-image in the first sub-image may be fused to obtain an edge-fused initial target sub-image. The target sub-image may represent the initial target sub-image after the edge fusion. The edge region may be an area obtained by multiplying the length and width of the image by a preset range. The preset range should be set small, to prevent the edge fusion operation from affecting the target foreground object generated in the target sub-image. For example, if the image has a length of 10 and a width of 5 and the preset range is set to 5%, the edge region along either long side of the image has a width of 0.25, and the edge region along either wide (short) side has a width of 0.5.
Optionally, fusing the first edge region sub-image in the initial target sub-image with the second edge region sub-image of the first sub-image, including:
and aiming at each first pixel point in the first edge region of the initial target image, calculating according to the pixel value of the first pixel point and the pixel value of a second pixel point at the same position as the first pixel point in the second edge region of the first sub-image to obtain the pixel value of a third pixel point at the same position as the first pixel point in the target sub-image.
The method for fusing the first edge region sub-image in the initial target sub-image and the second edge region sub-image in the first sub-image may be as follows: with corresponding weight values set for the first pixel points in the first edge region and the second pixel points in the second edge region, the pixel value of a first pixel point is multiplied by the first weight value, the pixel value of the second pixel point at the same position is multiplied by the second weight value, and the sum of the two products is determined as the pixel value of the third pixel point at the same position in the target sub-image. Since the size of the initial target sub-image is the same as the size of the first sub-image, the size of the first edge region may also be the same as the size of the second edge region. The first weight value and the second weight value may be set according to actual conditions, which is not specifically limited in the present disclosure.
For example, suppose the pixel value of the first pixel point is 50, the pixel value of the second pixel point at the same position is 60, the first weight value is 0.3, and the second weight value is 0.7. In this case, the pixel value of the third pixel point at the same position in the target sub-image is 50 × 0.3 + 60 × 0.7 = 57.
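A minimal sketch of this per-pixel weighted blending, assuming the two edge regions have identical shapes and the weights are given as scalars:

```python
import numpy as np

def blend_edges(first_edge: np.ndarray, second_edge: np.ndarray,
                w1: float = 0.3, w2: float = 0.7) -> np.ndarray:
    """Blend the first edge region of the initial target sub-image with the
    second edge region of the first sub-image, pixel by pixel.
    Matches the worked example: 50 * 0.3 + 60 * 0.7 = 57."""
    assert first_edge.shape == second_edge.shape
    return w1 * first_edge + w2 * second_edge
```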
Optionally, the method provided by the embodiment of the present disclosure may further include:
acquiring object type information of a target foreground object;
on this basis, at least taking the frame coordinates as the labeling information of the target foreground object in the target sub-image may include:
and taking the frame coordinates and the object type information as labeling information of the target foreground object in the target sub-image.
It should be noted that the annotation information may also include object type information of the target foreground object. The object type information relates to a real image sample set used in the training process of the foreground object generation model, the real image sample set including real foreground objects. It is understood that the network architecture used in the training process of the foreground object generation model may be similar to that of a GAN, that is, a foreground object identification model and the foreground object generation model may be involved in the training process of the foreground object generation model. Specifically, the real image sample set, together with the target sub-image sample generated by the foreground object generation model to be trained from an input second random matrix sample and a first sub-image, may be input into the foreground object identification model to be trained to obtain a corresponding identification result, and the training parameters of the foreground object identification model to be trained and of the foreground object generation model to be trained may be adjusted in turn, following a training process similar to that of the background image generation model, to obtain a foreground object generation model capable of generating a target sub-image that contains a target foreground object. In a possible implementation, the object type information of the target foreground object may be the same as the object type information of the real foreground objects in the real image sample set. In this case, the object type information of the target foreground object may be determined according to the object type information of the real foreground objects.
On the basis, the frame coordinates and the object type information can be used as labeling information of the target foreground object in the target sub-image.
It should be noted that, in the related art, data may be pre-labeled by using a trained model, or labeled data may be produced by splicing. However, the former still requires manual screening of the pre-labeled data, which is inefficient, and the latter produces labeled data that looks stiff and unnatural, so the quality of the labeled data cannot be ensured. In the present disclosure, the background image is segmented to obtain the first sub-image and the second sub-image, the target sub-image is generated based on the first sub-image, and the target sub-image contains the target foreground object, so that more natural annotated image data can be obtained after the target sub-image is fused with the second sub-image. That is to say, the technical scheme provided by the embodiments of the present disclosure can generate more natural annotated image data without manually labeling data or manually screening data.
By adopting the method, the first random matrix is input into the trained background image generation model, and the background image and the frame coordinates output by the background image generation model are obtained. Since the frame coordinates can be used to represent the position of the first sub-image of the foreground object to be generated in the background image, the background image can be segmented according to the frame coordinates to obtain the first sub-image and the second sub-image. On the basis, the second random matrix and the first sub-image are input into the trained foreground object generation model, and the target sub-image output by the foreground object generation model is obtained. The size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object. And at least taking the frame coordinates as the labeling information of the target foreground object in the target sub-image, and fusing the labeled target sub-image and the second sub-image, thereby obtaining image data with labels. This way of generating annotated image data by a model of the present disclosure is more efficient than the way of manually annotating data in the related art.
Based on the same inventive concept, the present disclosure further provides an annotated image data generating apparatus, referring to fig. 2, fig. 2 is a block diagram of an annotated image data generating apparatus according to an exemplary embodiment of the present disclosure, and as shown in fig. 2, the annotated image data generating apparatus 100 includes:
a first input module 101, configured to input a first random matrix into a trained background image generation model, to obtain a background image and frame coordinates output by the background image generation model, where the frame coordinates are used to represent a position of a first sub-image of a foreground object to be generated in the background image;
a segmentation module 102, configured to segment the background image according to the frame coordinates to obtain the first sub-image and the second sub-image;
a second input module 103, configured to input a second random matrix and the first sub-image into a trained foreground object generation model, and obtain a target sub-image output by the foreground object generation model, where a size of the target sub-image is the same as a size of the first sub-image, and the target sub-image includes a target foreground object;
and the fusion module 104 is configured to at least use the frame coordinates as labeling information of the target foreground object in the target sub-image, and fuse the labeled target sub-image with the second sub-image to obtain image data with a label.
By adopting the device, the first random matrix is input into the trained background image generation model, and the background image and the frame coordinates output by the background image generation model are obtained. Since the frame coordinates can be used to represent the position of the first sub-image of the foreground object to be generated in the background image, the background image can be segmented according to the frame coordinates to obtain the first sub-image and the second sub-image. On the basis, the second random matrix and the first sub-image are input into the trained foreground object generation model, and the target sub-image output by the foreground object generation model is obtained. The size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object. And at least taking the frame coordinate as the labeling information of the target foreground object in the target sub-image, and fusing the labeled target sub-image and the second sub-image, thereby obtaining image data with labels. Such an apparatus for generating annotated image data via a model according to the present disclosure is more efficient than the way data is annotated manually in the related art.
Optionally, the apparatus 100 further comprises a training module configured to:
inputting a random matrix sample into a background image generation model to be trained to obtain a synthetic background image output by the background image generation model to be trained;
inputting the synthesized background image into a foreground recognizer to obtain a first recognition result of whether a target foreground object exists in the synthesized background image or not, wherein the first recognition result is output by the foreground recognizer;
inputting the random matrix sample, the synthetic background image and a background image sample set into an identification model to be trained to obtain a second identification result of whether the identification model to be trained identifies the synthetic background image as an image in the background image sample set and a third identification result of whether the identification model to be trained identifies each background image sample in the background image sample set as an image generated by the background image generation model to be trained;
calculating loss information according to the first authentication result, the second authentication result and the third authentication result;
and adjusting the training parameters of the identification model to be trained according to the loss information to obtain the updated identification model to be trained, and returning to execute the steps of inputting the random matrix sample, the synthetic background image and the background image sample set into the identification model to be trained to obtain the second identification result and the third identification result until the training parameters of the identification model to be trained are updated for N times.
Optionally, the apparatus 100 further comprises:
and the updating module is used for updating the training parameters of the background image generation model according to the loss information obtained by the (N + 1) th calculation after the training parameters of the identification model to be trained are updated for the Nth time.
Optionally, the training module is further configured to:
calculating the loss information by the following formula:
[Formula shown as an image in the original publication (Figure BDA0003522008780000181); the symbols are defined below.]
wherein J represents the loss information, m represents the total number of samples, x^(i) represents the i-th background image sample, z^(i) represents the i-th random matrix sample, G(z^(i)) represents the i-th synthetic background image, D(x^(i)) represents the probability of identifying the i-th background image sample as an image generated by the background image generation model to be trained, T(G(z^(i))) represents the probability that the foreground recognizer recognizes the target foreground object in the i-th synthetic background image, and D(G(z^(i))) represents the probability of identifying the i-th synthetic background image as an image in the background image sample set.
Optionally, the foreground object generation model includes a network fusion module, and the second input module 103 is further configured to:
inputting the second random matrix and the first subimage into a network fusion module to obtain a fusion matrix obtained by fusing the second random matrix and an image matrix corresponding to the first subimage;
generating an initial target sub-image based on the fusion matrix, wherein the size of the initial target sub-image is the same as that of the first sub-image;
and fusing a first edge area sub-image in the initial target sub-image with a second edge area sub-image of the first sub-image to obtain an edge fused initial target sub-image, wherein the target sub-image represents the edge fused initial target sub-image.
Optionally, the apparatus 100 further comprises:
and the calculation module is used for calculating the pixel value of a third pixel point at the same position as the first pixel point in the target sub-image according to the pixel value of the first pixel point and the pixel value of a second pixel point at the same position as the first pixel point in the second edge region of the first sub-image aiming at each first pixel point in the first edge region of the initial target image.
Optionally, the apparatus 100 further comprises:
the acquisition module is used for acquiring the object type information of the target foreground object;
the fusion module 104 is further configured to:
and taking the frame coordinates and the object type information as labeling information of the target foreground object in the target sub-image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the same inventive concept, embodiments of the present disclosure also provide a non-transitory computer-readable storage medium, on which a computer program is stored, where the program, when executed by a processor, implements the steps of the above-mentioned labeled image data generation method.
Specifically, the computer-readable storage medium may be a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, a public cloud server, etc.
With regard to the computer-readable storage medium in the above-described embodiments, the steps of the method for generating image data with annotations when the computer program stored thereon is executed will be described in detail in relation to the embodiment of the method, and will not be elaborated upon here.
Based on the same inventive concept, an embodiment of the present disclosure further provides an electronic device, including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the above-described annotated image data generation method.
Fig. 3 is a block diagram illustrating an electronic device 200 according to an example embodiment. As shown in fig. 3, the electronic device 200 may include: a processor 201 and a memory 202. The electronic device 200 may also include one or more of a multimedia component 203, an input/output (I/O) interface 204, and a communication component 205.
The processor 201 is configured to control the overall operation of the electronic device 200, so as to complete all or part of the steps in the above-mentioned annotated image data generation method. The memory 202 is used to store various types of data to support operation at the electronic device 200, such as instructions for any application or method operating on the electronic device 200 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and so forth. The memory 202 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The multimedia component 203 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 202 or transmitted through the communication component 205. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 204 provides an interface between the processor 201 and other interface modules, such as a keyboard, a mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component 205 is used for wired or wireless communication between the electronic device 200 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G or 5G, NB-IoT (Narrow Band Internet of Things), or a combination of one or more of them; accordingly, the communication component 205 may include a Wi-Fi module, a Bluetooth module and an NFC module.
In an exemplary embodiment, the electronic device 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-described annotated image data generation method.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within its technical concept, and these simple modifications all fall within the protection scope of the present disclosure.
It should further be noted that the specific features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not described separately in the present disclosure.
In addition, the various embodiments of the present disclosure may be combined arbitrarily, and such combinations should likewise be regarded as part of the disclosure, as long as they do not depart from the spirit of the present disclosure.

Claims (10)

1. A method for generating annotated image data, the method comprising:
inputting a first random matrix into a trained background image generation model to obtain a background image and frame coordinates output by the background image generation model, wherein the frame coordinates represent the position, in the background image, of a first sub-image in which a foreground object is to be generated;
segmenting the background image according to the frame coordinates to obtain a first sub-image and a second sub-image;
inputting a second random matrix and the first sub-image into a trained foreground object generation model to obtain a target sub-image output by the foreground object generation model, wherein the size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object;
and at least taking the frame coordinates as the labeling information of the target foreground object in the target sub-image, and fusing the labeled target sub-image with the second sub-image to obtain annotated image data.
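For illustration only (this sketch is not part of the claims), the overall pipeline of claim 1 could be wired together roughly as follows; `background_generator` and `foreground_generator` stand in for the trained models and are assumptions, not disclosed implementations:

```python
import numpy as np

def generate_annotated_sample(background_generator, foreground_generator,
                              latent_dim=128, rng=None):
    """Sketch of claim 1: produce one (annotated_image, annotation) pair."""
    rng = rng or np.random.default_rng()

    # Step 1: a first random matrix goes into the background image generation
    # model, which outputs a background image plus frame (bounding-box) coordinates.
    z_bg = rng.standard_normal(latent_dim)
    background, (x1, y1, x2, y2) = background_generator(z_bg)

    # Step 2: segment the background by the frame coordinates into the first
    # sub-image (inside the box); the remainder plays the role of the second sub-image.
    first_sub_image = background[y1:y2, x1:x2].copy()

    # Step 3: a second random matrix and the first sub-image go into the foreground
    # object generation model, yielding a same-sized target sub-image with the object.
    z_fg = rng.standard_normal(latent_dim)
    target_sub_image = foreground_generator(z_fg, first_sub_image)

    # Step 4: fuse the target sub-image with the second sub-image and keep the
    # frame coordinates as the labeling information.
    annotated_image = background.copy()
    annotated_image[y1:y2, x1:x2] = target_sub_image
    return annotated_image, {"bbox": (x1, y1, x2, y2)}
```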
2. The method of claim 1, wherein the training process of the background image generation model comprises:
inputting a random matrix sample into a background image generation model to be trained to obtain a synthetic background image output by the background image generation model to be trained;
inputting the synthetic background image into a foreground recognizer to obtain a first identification result, output by the foreground recognizer, of whether a target foreground object exists in the synthetic background image;
inputting the random matrix sample, the synthetic background image and a background image sample set into an identification model to be trained to obtain a second identification result of whether the identification model to be trained identifies the synthetic background image as an image in the background image sample set and a third identification result of whether the identification model to be trained identifies each background image sample in the background image sample set as an image generated by the background image generation model to be trained;
calculating loss information according to the first identification result, the second identification result and the third identification result;
and adjusting the training parameters of the identification model to be trained according to the loss information to obtain the updated identification model to be trained, and returning to execute the steps of inputting the random matrix sample, the synthetic background image and the background image sample set into the identification model to be trained to obtain the second identification result and the third identification result until the training parameters of the identification model to be trained are updated for N times.
3. The method of claim 2, further comprising:
and after the training parameters of the identification model to be trained are updated for the N-th time, updating the training parameters of the background image generation model according to the loss information obtained by the (N+1)-th calculation.
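As an illustrative reading of the alternating schedule in claims 2 and 3 (update the identification model on N consecutive losses, then update the background image generation model on the (N+1)-th loss), a hedged PyTorch-style sketch is given below. The objects `generator`, `discriminator`, `foreground_recognizer`, the optimizers and `compute_loss` are assumptions, and for simplicity the identification model here receives only the images rather than the random matrix sample mentioned in claim 2:

```python
import torch

def train_round(generator, discriminator, foreground_recognizer,
                real_backgrounds, g_optimizer, d_optimizer,
                compute_loss, N, latent_dim=128):
    """One round: N identification-model updates, then one generator update."""
    loss = None
    for step in range(N + 1):
        z = torch.randn(real_backgrounds.size(0), latent_dim)
        synthetic = generator(z)                     # synthetic background images
        t_out = foreground_recognizer(synthetic)     # first identification result
        d_fake = discriminator(synthetic)            # second identification result
        d_real = discriminator(real_backgrounds)     # third identification result
        loss = compute_loss(t_out, d_fake, d_real)   # loss information (claim 4)

        if step < N:
            # Claim 2: adjust the identification model's training parameters.
            d_optimizer.zero_grad()
            loss.backward()
            d_optimizer.step()
        else:
            # Claim 3: after the N-th identification-model update, use the
            # (N+1)-th loss to update the background image generation model.
            g_optimizer.zero_grad()
            loss.backward()
            g_optimizer.step()
    return loss.item()
```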
4. The method of claim 2, wherein calculating the loss information according to the first identification result, the second identification result and the third identification result comprises:
calculating the loss information by the following formula:
[The formula is given in the original filing as image FDA0003522008770000021.]
wherein J represents the loss information, m represents the total number of samples, x^(i) denotes the i-th background image sample, z^(i) denotes the i-th random matrix sample, G(z^(i)) represents the i-th synthetic background image, D(x^(i)) represents the probability of identifying the i-th background image sample as an image generated by the background image generation model to be trained, T(G(z^(i))) represents the probability that the foreground recognizer recognizes the target foreground object in the i-th synthetic background image, and D(G(z^(i))) represents the probability of identifying the i-th synthetic background image as an image in the background image sample set.
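The formula itself appears in the filing only as the image placeholder above, so it cannot be reproduced verbatim here. Purely as a hedged illustration consistent with the variable definitions of claim 4 (the use of logarithms, the signs, and the equal weighting are all assumptions), a GAN-style objective augmented with the foreground-recognition term might take a form such as:

```latex
J = \frac{1}{m} \sum_{i=1}^{m} \Big[
      \log\bigl(1 - D(x^{(i)})\bigr)
    + \log\bigl(1 - D(G(z^{(i)}))\bigr)
    + \log\bigl(1 - T(G(z^{(i)}))\bigr)
  \Big]
```

Under this guess, maximizing J would drive all three probabilities toward zero: real samples are not mistaken for generated ones, synthetic backgrounds are not mistaken for real ones, and the synthetic backgrounds do not already contain the target foreground object. The original filing may combine the terms differently.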
5. The method of claim 1, wherein the foreground object generation model comprises a network fusion module, and the inputting the second random matrix and the first sub-image into the trained foreground object generation model to obtain the target sub-image output by the foreground object generation model comprises:
inputting the second random matrix and the first sub-image into the network fusion module to obtain a fusion matrix obtained by fusing the second random matrix with an image matrix corresponding to the first sub-image;
generating an initial target sub-image based on the fusion matrix, wherein the size of the initial target sub-image is the same as that of the first sub-image;
and fusing a first edge region sub-image in the initial target sub-image with a second edge region sub-image of the first sub-image to obtain an edge-fused initial target sub-image, wherein the target sub-image is the edge-fused initial target sub-image.
6. The method of claim 5, wherein fusing the first edge region sub-image of the initial target sub-image with the second edge region sub-image of the first sub-image comprises:
and for each first pixel point in a first edge region of the initial target sub-image, calculating according to the pixel value of the first pixel point and the pixel value of a second pixel point at the same position as the first pixel point in a second edge region of the first sub-image, so as to obtain the pixel value of a third pixel point at the same position as the first pixel point in the target sub-image.
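Claim 6 does not fix the exact per-pixel operation, so the following is just one plausible instantiation, assuming a distance-based weighted average over an edge band of width `w` pixels (the weighting scheme is an assumption):

```python
import numpy as np

def fuse_edges(initial_target, first_sub_image, w=8):
    """Blend the edge band of the generated sub-image with the original crop."""
    assert initial_target.shape == first_sub_image.shape
    h, width = initial_target.shape[:2]

    # Per-pixel weight: 0 at the border (keep the first sub-image's pixel),
    # rising to 1 once a pixel lies `w` or more pixels inside the sub-image
    # (keep the generated pixel).
    ys, xs = np.mgrid[0:h, 0:width]
    dist = np.minimum.reduce([ys, xs, h - 1 - ys, width - 1 - xs])
    alpha = np.clip(dist / float(w), 0.0, 1.0)
    if initial_target.ndim == 3:
        alpha = alpha[..., None]

    fused = (alpha * initial_target.astype(np.float32)
             + (1.0 - alpha) * first_sub_image.astype(np.float32))
    return fused.astype(initial_target.dtype)
```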
7. The method of claim 1, further comprising:
acquiring object type information of the target foreground object;
the at least using the frame coordinates as labeling information of the target foreground object in the target sub-image includes:
and taking the frame coordinates and the object type information as labeling information of the target foreground object in the target sub-image.
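As a small illustration of claim 7, the labeling information attached to one generated sample could bundle the frame coordinates with the object type information; the field names and values below are hypothetical:

```python
# Hypothetical annotation record for one generated sample.
annotation = {
    "bbox": (32, 48, 96, 112),   # frame coordinates (x1, y1, x2, y2) from the background model
    "category": "vehicle",       # object type information of the target foreground object
}
```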
8. An apparatus for generating annotated image data, the apparatus comprising:
the first input module is used for inputting a first random matrix into a trained background image generation model to obtain a background image and frame coordinates output by the background image generation model, wherein the frame coordinates represent the position, in the background image, of a first sub-image in which a foreground object is to be generated;
the segmentation module is used for segmenting the background image according to the frame coordinates to obtain the first sub-image and the second sub-image;
the second input module is used for inputting a second random matrix and the first sub-image into a trained foreground object generation model to obtain a target sub-image output by the foreground object generation model, wherein the size of the target sub-image is the same as that of the first sub-image, and the target sub-image comprises a target foreground object;
and the fusion module is used for at least taking the frame coordinates as the labeling information of the target foreground object in the target sub-image, and fusing the labeled target sub-image with the second sub-image to obtain annotated image data.
9. A non-transitory computer readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202210179838.6A 2022-02-25 2022-02-25 Image data generation method with label, device, storage medium and electronic equipment Pending CN114723646A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210179838.6A CN114723646A (en) 2022-02-25 2022-02-25 Image data generation method with label, device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210179838.6A CN114723646A (en) 2022-02-25 2022-02-25 Image data generation method with label, device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114723646A true CN114723646A (en) 2022-07-08

Family

ID=82235797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210179838.6A Pending CN114723646A (en) 2022-02-25 2022-02-25 Image data generation method with label, device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114723646A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115223016A (en) * 2022-09-19 2022-10-21 苏州万店掌网络科技有限公司 Sample labeling method, device, equipment and medium
CN115311296A (en) * 2022-10-12 2022-11-08 湖南视比特机器人有限公司 Data generation method, image recognition method, computer storage medium and terminal device
CN115861739A (en) * 2023-02-08 2023-03-28 海纳云物联科技有限公司 Training method, device, equipment, storage medium and product of image segmentation model
CN116958766A (en) * 2023-07-04 2023-10-27 阿里巴巴(中国)有限公司 Image processing method
CN116958766B (en) * 2023-07-04 2024-05-14 阿里巴巴(中国)有限公司 Image processing method and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN109117831B (en) Training method and device of object detection network
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
CN114723646A (en) Image data generation method with label, device, storage medium and electronic equipment
CN111666960B (en) Image recognition method, device, electronic equipment and readable storage medium
CN111739027B (en) Image processing method, device, equipment and readable storage medium
KR102322773B1 (en) Method and apparatus for detecting burrs of electrode pieces
CN112380981A (en) Face key point detection method and device, storage medium and electronic equipment
CN116049397B (en) Sensitive information discovery and automatic classification method based on multi-mode fusion
CN115699082A (en) Defect detection method and device, storage medium and electronic equipment
CN111652181B (en) Target tracking method and device and electronic equipment
US11756288B2 (en) Image processing method and apparatus, electronic device and storage medium
CN110991412A (en) Face recognition method and device, storage medium and electronic equipment
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN113780469A (en) Training method, medium, device and computing equipment of image recognition model
CN116563840B (en) Scene text detection and recognition method based on weak supervision cross-mode contrast learning
CN113239883A (en) Method and device for training classification model, electronic equipment and storage medium
CN113158773A (en) Training method and training device for living body detection model
CN111914850B (en) Picture feature extraction method, device, server and medium
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN111062374A (en) Identification method, device, system, equipment and readable medium of identity card information
CN114241202A (en) Method and device for training dressing classification model and method and device for dressing classification
CN114973268A (en) Text recognition method and device, storage medium and electronic equipment
CN114757840A (en) Image processing method, image processing device, readable medium and electronic equipment
CN111353470B (en) Image processing method and device, readable medium and electronic equipment
CN113537359A (en) Training data generation method and device, computer readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination