CN109614983A - Method, apparatus and system for generating training data - Google Patents

Method, apparatus and system for generating training data

Info

Publication number
CN109614983A
Authority
CN
China
Prior art keywords
image
target sample
training data
instruction
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811260697.0A
Other languages
Chinese (zh)
Other versions
CN109614983B (en)
Inventor
刘源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811260697.0A priority Critical patent/CN109614983B/en
Publication of CN109614983A publication Critical patent/CN109614983A/en
Application granted granted Critical
Publication of CN109614983B publication Critical patent/CN109614983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this specification provide a method, apparatus and system for generating training data. The method comprises: sending an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction; obtaining the captured first image from the image capture device, and then replacing the background image of the first image with a preset background image to obtain a second image of the target sample; and finally, labeling the target sample in the second image by a data labeling algorithm to obtain the training data; wherein the background image is the region of the first image other than the target sample.

Description

Method, apparatus and system for generating training data
Technical field
This application relates to the field of image processing, and in particular to a method, apparatus and system for generating training data.
Background technique
With the rapid development of computer technology, machine learning has been widely applied in many fields. For example, in unmanned retail, computer-vision-based solutions can use image recognition technology to identify the goods purchased by a user, so as to complete the purchase and payment of the goods. At present, the mainstream approach to image recognition is the deep learning algorithm based on convolutional neural networks (Convolutional Neural Network, CNN).
When image recognition is performed with a deep learning algorithm, a large amount of training data usually needs to be collected in advance, the CNN model is trained with the training data, and the trained CNN model is then used for image recognition. Training a CNN model therefore requires a large amount of training data.
Therefore, a method that can generate training data quickly and efficiently is needed to meet the demands of model training.
Summary of the invention
The purpose of the embodiments of this specification is to provide a method, apparatus and system for generating training data. When generating training data, an image capture instruction is sent to an image capture device, so that the image capture device can be controlled to capture a first image of a target sample, realizing automatic collection of target sample images. In addition, in the embodiments of this specification, the target sample in the second image is labeled by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. That is, through the embodiments of this specification, training data is generated automatically, which improves the efficiency of training data generation, reduces labor cost, and yields training data with higher accuracy.
In order to solve the above technical problem, the embodiments of this specification are implemented as follows.
An embodiment of this specification provides a method for generating training data, comprising:
sending an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
obtaining the first image, and replacing the background image of the first image with a preset background image to obtain a second image of the target sample, wherein the background image is the region of the first image other than the target sample; and
labeling the target sample in the second image by a data labeling algorithm to obtain the training data.
An embodiment of this specification further provides an apparatus for generating training data, comprising:
a first sending module, configured to send an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
an obtaining module, configured to obtain the first image;
a replacement module, configured to replace the background image of the first image with a preset background image to obtain a second image of the target sample, where the background image is the region of the first image other than the target sample; and
a labeling module, configured to label the target sample in the second image by a data labeling algorithm to obtain the training data.
An embodiment of this specification further provides a system for generating training data, including an image capture device and a training data generating device, where the training data generating device includes the above apparatus for generating training data;
the image capture device is configured to receive the image capture instruction sent by the training data generating device, and capture a first image of a target sample according to the image capture instruction;
the training data generating device is configured to send the image capture instruction to the image capture device, obtain the first image from the image capture device, and replace the background image of the first image with a preset background image to obtain a second image of the target sample, where the background image is the region of the first image other than the target sample; and label the target sample in the second image by a data labeling algorithm to obtain the training data.
An embodiment of this specification further provides a device for generating training data, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to:
send an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
obtain the first image, and replace the background image of the first image with a preset background image to obtain a second image of the target sample, wherein the background image is the region of the first image other than the target sample; and
label the target sample in the second image by a data labeling algorithm to obtain training data.
An embodiment of this specification further provides a storage medium for storing computer-executable instructions which, when executed, implement the following process:
sending an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
obtaining the first image, and replacing the background image of the first image with a preset background image to obtain a second image of the target sample, wherein the background image is the region of the first image other than the target sample; and
labeling the target sample in the second image by a data labeling algorithm to obtain training data.
With the technical solution of this embodiment, when generating training data, an image capture instruction is sent to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, in the embodiments of this specification, the target sample in the second image is labeled by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. That is, through the embodiments of this specification, training data is generated automatically, which improves the efficiency of training data generation, reduces labor cost, and yields training data with higher accuracy.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of this specification or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below are only some of the embodiments described in this application, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a first method flowchart of the method for generating training data provided by an embodiment of this specification;
Fig. 2 is a method flowchart of labeling the target sample in the method for generating training data provided by an embodiment of this specification;
Fig. 3 is a second method flowchart of the method for generating training data provided by an embodiment of this specification;
Fig. 4 is a third method flowchart of the method for generating training data provided by an embodiment of this specification;
Fig. 5 is a schematic diagram of the modules of the apparatus for generating training data provided by an embodiment of this specification;
Fig. 6 is a first schematic structural diagram of the system for generating training data provided by an embodiment of this specification;
Fig. 7 is a second schematic structural diagram of the system for generating training data provided by an embodiment of this specification;
Fig. 8 is a schematic structural diagram of the device for generating training data provided by an embodiment of this specification.
Detailed description of the embodiments
In order to make those skilled in the art better understand the technical solutions in this application, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings in the embodiments of this specification. It is apparent that the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the scope of protection of this application.
Embodiments of this specification provide a method, apparatus, system and storage medium for generating training data, which can realize automatic collection and automatic labeling of training data, and thus automatic generation of training data; this improves the efficiency of training data generation and the accuracy of the generated training data.
One specific application field of the method provided by the embodiments of this specification is unmanned retail. In unmanned retail, in order to complete the transaction of goods, the goods purchased by a user need to be identified. Specifically, an image recognition model can be used to identify the goods, so the image recognition model needs to be trained in advance; and training the image recognition model requires generating training data.
Fig. 1 is a first method flowchart of the method for generating training data provided by an embodiment of this specification. The method shown in Fig. 1 includes at least the following steps:
Step 102: send an image capture instruction to an image capture device, so that the image capture device captures a first image of the target sample according to the image capture instruction.
The execution subject of the method provided by this embodiment of this specification is a training data generating device, such as a computer, mobile phone, tablet computer or other device with image processing capability; specifically, it may be the training data generating apparatus installed on the training data generating device.
Specifically, a communication connection is established between the training data generating device and the image capture device. When an image of the target sample needs to be captured, the training data generating device sends an image capture instruction to the image capture device; after receiving the image capture instruction, the image capture device captures an image of the target sample according to the instruction, and the captured image is denoted as the first image.
In specific implementation, the image capture instruction may be triggered by a user through the training data generating device, or image capture instructions may be sent to the image capture device automatically at a preset frequency. This specification does not limit the way in which the image capture instruction is triggered.
In the embodiments of this specification, the target sample may be any object, such as a model, an animal or plant specimen, or goods for sale. The specific content represented by the target sample can be determined according to the actual application scenario, and the embodiments of this specification do not limit the specific category, name or other details of the target sample.
Specifically, for a given target sample, images may in some cases need to be captured from multiple viewing angles. Therefore, in specific implementation, each time the image capture device has captured a first image of the target sample, the captured first image may be sent to the training data generating device so that training data is generated for that first image; of course, in other specific embodiments, the training data for the target sample may instead be generated by the training data generating device after all images of the target sample have been captured.
Step 104: obtain the first image, and replace the background image of the first image with a preset background image to obtain a second image of the target sample, wherein the background image is the region of the first image other than the target sample.
In step 104, the first image sent by the image capture device is received. When the first image of the target sample is captured, the captured content includes both the target sample and the background in which the target sample is currently placed; therefore, the region of the first image corresponding to the target sample can be denoted as the foreground image, and the region corresponding to the background can be denoted as the background image.
In actual application, the target sample may be placed in different scenes, and the background of the target sample then changes accordingly. Therefore, in order to further improve the recognition accuracy of the trained image recognition model, in the embodiments of this specification, after the first image of the target sample is obtained, the step of replacing the background image of the first image with the preset background image needs to be performed.
The preset background image is the background of the target sample in the actual application scenario. For example, if the target sample is goods to be sold in a vending machine, then when the goods are recognized, the background of the captured goods image is the background at the location where the goods are placed in the vending machine. Therefore, in order to further improve the accuracy of the training data and thus the discrimination ability of the trained image recognition model, the background image in the captured first image of the goods can be replaced with an image of the background corresponding to the goods' location in the vending machine.
Step 106: label the target sample in the second image by a data labeling algorithm to obtain the training data.
In step 106, labeling the target sample in the second image actually means labeling the position of the target sample in the second image and attribute information of the target sample such as its name and category. Of course, other information of the target sample may also be labeled, which is not limited by the embodiments of this specification.
Specifically, in step 106, labeling the position of the target sample in the second image may be done by marking out the pixels corresponding to the target sample, so that the target sample is labeled with higher precision.
In the embodiments of this specification, the attribute information of the target sample, such as its name and category, together with the labeled pixel image corresponding to the target sample, is used as the labeling data of the target sample, and the second image and the corresponding labeling data are used as the training data.
In the embodiments of this specification, by sending an image capture instruction to the image capture device, the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, in the embodiments of this specification, the target sample in the second image is labeled by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. Training data is thus generated automatically, which improves the efficiency of training data generation, reduces labor cost, and yields training data with higher accuracy.
For ease of understanding the method for generating training data provided by the embodiments of this specification, the specific implementation of each of the above steps is described in detail below.
In the embodiments of this specification, the image capture device may be an RGB (Red, Green, Blue) camera, in which case the captured first image is an RGB image. Alternatively, in another embodiment, in order to obtain three-dimensional information of the target sample and increase the dimensional information of the training data, the image capture device may include, in addition to the RGB camera, a depth camera for capturing a depth image of the target sample; in this case, the captured first image of the target sample includes an RGB image and a depth image. For these two different cases, the specific implementation of replacing the background image of the first image with the preset background image in step 104 differs, and the two cases are discussed separately below.
The first case:
The image capture device includes only an RGB camera, and accordingly the first image is an RGB image.
In this case, in step 104, replacing the background image of the first image with the preset background image to obtain the second image of the target sample includes the following steps (1) and (2):
Step (1): extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
Step (2): composite the foreground image with the preset background image to obtain the second image.
In the embodiments of this specification, the foreground image of the first image can be extracted by means of a mask, and the specific process is as follows:
Convert the first image to grayscale and binarize it, and extract the contour of the target sample in the first image; create a new mask image of the same size as the first image (for example, if the size of the first image is 640*480, the created mask image is also 640*480, where 640 and 480 are numbers of pixels); initialize the pixel values of all pixels of the new mask image to 0, so that the mask image is a completely black image.
In the mask image, the region of interest is outlined using the extracted contour of the target sample, and the pixel values of the pixels inside the region of interest are all set to 255, so that the region of interest is a white region; that is, on the mask image, the pixel values inside the region of interest are non-zero and the pixel values outside the region of interest are 0. Each pixel of the mask image is then ANDed with the corresponding pixel of the first image; since the AND result outside the region of interest is zero, only the target sample region (the region of interest) remains in the resulting image, and the pixel values of the remaining pixels are all zero, i.e. black.
Finally, the target sample region is cropped directly out of the resulting image, and the cropped target region is used as the foreground image of the first image.
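For illustration only, the mask-based extraction described above could be sketched in Python with OpenCV roughly as follows; the fixed binarization threshold and the assumption that the largest contour belongs to the target sample are hypothetical choices, not requirements of this embodiment.

# Sketch of the mask-based foreground extraction described above. Assumptions:
# the target sample yields the largest contour after binarization, and a fixed
# threshold of 50 separates it from the capture background.
import cv2
import numpy as np

def extract_foreground(first_image_bgr):
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)       # grayscale conversion
    _, binary = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)    # binarization
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)        # contours of the first image
    target_contour = max(contours, key=cv2.contourArea)            # assume largest contour = target sample
    mask = np.zeros(gray.shape, dtype=np.uint8)                    # new all-black mask, same size as first image
    cv2.drawContours(mask, [target_contour], -1, 255, cv2.FILLED)  # region of interest set to 255 (white)
    foreground = cv2.bitwise_and(first_image_bgr, first_image_bgr,
                                 mask=mask)                        # AND mask with the first image pixel by pixel
    return foreground, mask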
Of course, the above describes only one specific implementation of extracting the foreground image from the first image; the foreground image may also be extracted from the first image in other ways, which are not enumerated in the embodiments of this specification.
It should be noted that, in step (2) above, compositing the foreground image with the preset background image may be done by determining the overlay region of the foreground image in the preset background image and then overlaying the foreground image onto that region of the preset background image.
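A minimal compositing sketch for step (2), again purely illustrative: it assumes the overlay region is given by the (x, y) offset of the foreground patch's top-left corner in the preset background image, and it reuses the mask produced during foreground extraction.

# Sketch of step (2): overlay the extracted foreground onto the preset background image.
# Assumption: the background is at least as large as the foreground patch at offset (x, y).
import numpy as np

def composite(foreground, mask, background, x, y):
    second_image = background.copy()
    h, w = mask.shape
    roi = second_image[y:y + h, x:x + w]        # overlay region in the preset background
    roi[mask > 0] = foreground[mask > 0]        # copy only the target-sample pixels
    return second_image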
The second case:
The image capture device includes an RGB camera and a depth camera; the RGB camera captures an RGB image of the target sample and the depth camera captures a depth image of the target sample. The first image of the target sample therefore includes an RGB image and a depth image.
In this case, in step 104, replacing the background image of the first image with the preset background image to obtain the second image of the target sample specifically includes the following steps (a), (b) and (c):
Step (a): extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
Step (b): generate a virtual viewpoint image corresponding to the foreground image according to the foreground image and the depth image;
Step (c): composite the virtual viewpoint image with the preset background image to obtain the second image.
The specific implementation of step (a) may refer to the implementation of step (1) in the first case, and is not repeated here.
In step (b), generating the virtual viewpoint image corresponding to the foreground image according to the foreground image and the depth image specifically includes the following process:
The foreground image is projected according to the depth image: the foreground image is projected into the world coordinate system to obtain the projection coordinates of the foreground image in the world coordinate system; then, according to these projection coordinates, the foreground image is projected onto a virtual image plane, thereby obtaining the virtual viewpoint image of the foreground image on the virtual projection plane.
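The projection described above can be sketched with a standard pinhole-camera model; the intrinsic matrices K and K_virtual and the relative pose (R, t) of the virtual viewpoint are assumed inputs supplied by the caller, and simple forward warping without hole filling is used — none of this is prescribed by this embodiment.

# Sketch of virtual viewpoint synthesis: back-project foreground pixels using the
# depth image, then re-project them onto a virtual image plane.
# K, K_virtual, R, t are assumed camera parameters (not specified by this embodiment).
import numpy as np

def virtual_view(foreground, mask, depth, K, K_virtual, R, t, out_shape):
    virtual = np.zeros((out_shape[0], out_shape[1], 3), dtype=foreground.dtype)
    K_inv = np.linalg.inv(K)
    ys, xs = np.nonzero(mask)                              # only project target-sample pixels
    for v, u in zip(ys, xs):
        z = depth[v, u]
        if z <= 0:
            continue
        point = z * (K_inv @ np.array([u, v, 1.0]))        # pixel -> 3D coordinates via depth
        point_virtual = R @ point + t                      # into the virtual viewpoint's frame
        if point_virtual[2] <= 0:
            continue
        uvw = K_virtual @ point_virtual                    # project onto the virtual image plane
        u2 = int(round(uvw[0] / uvw[2]))
        v2 = int(round(uvw[1] / uvw[2]))
        if 0 <= u2 < out_shape[1] and 0 <= v2 < out_shape[0]:
            virtual[v2, u2] = foreground[v, u]
    return virtual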
Of course, the way of generating the virtual viewpoint image from the flat image (the foreground image) and the depth image is not limited to this; it can also be realized in other ways, which are not enumerated in the embodiments of this specification.
In the embodiments of this specification, by capturing the depth image and the RGB image of the target sample, information about the target sample in more dimensions can be obtained, so that the generated training data carries more data information.
Specifically, as shown in Fig. 2, in step 106, labeling the target sample in the second image by a data labeling algorithm to obtain the training data specifically includes the following steps:
Step 1062: determine the attribute label of the target sample, where the attribute label includes at least the sample name of the target sample;
Step 1064: determine the pixels corresponding to the target sample by performing image segmentation on the second image, and generate a pixel label image corresponding to the target sample;
Step 1066: determine the second image, the attribute label and the pixel label image as the training data.
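As an illustration of what one piece of training data produced by steps 1062 to 1066 might look like in storage, the sketch below bundles the second image, the attribute label and the pixel label image into one record; the file layout and field names are assumptions, not a format defined by this embodiment.

# Sketch: one training-data record = second image + attribute label + pixel label image.
# File layout and field names are illustrative assumptions.
import json
import cv2

def save_training_record(second_image, attribute_label, pixel_label_image, record_id, out_dir):
    image_path = f"{out_dir}/{record_id}_image.png"
    label_image_path = f"{out_dir}/{record_id}_pixel_label.png"
    cv2.imwrite(image_path, second_image)
    cv2.imwrite(label_image_path, pixel_label_image)
    record = {
        "image": image_path,
        "pixel_label_image": label_image_path,
        "attributes": attribute_label,   # e.g. {"name": "...", "category": "...", "price": ...}
    }
    with open(f"{out_dir}/{record_id}.json", "w", encoding="utf-8") as f:
        json.dump(record, f, ensure_ascii=False)
    return record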
It should be noted that, in the embodiments of this specification, the attribute label of the target sample refers to the attribute information of the target sample, for example, the category of the target sample, the name of the target sample, the price of the target sample, the place of production of the target sample, the date of manufacture of the target sample, and so on.
In step 1062, the attribute label of the target sample can be determined in at least the following two ways:
First, sample information such as the sample name and sample category of the target sample entered by a user through the training data generating device is received, and the sample information entered by the user is used as the attribute label of the target sample.
In specific implementation, when the training data generating device performs the step of labeling the target sample in the second image, an attribute label input box for the target sample may be displayed on the screen of the training data generating device so that the user enters the attribute information of the target sample, and the training data generating device receives the relevant attribute information of the target sample entered by the user. Alternatively, the training data generating device may send the attribute label input box for the target sample to the user's terminal device, so that the user enters the attribute information of the target sample through the terminal device, and the training data generating device receives the attribute information of the target sample sent by the user through the terminal device.
Second, while the image capture device is being controlled to capture the first image of the target sample, a scanning device may also be controlled to scan an identification code, such as a bar code, on the target sample and recognize information such as the sample name and category of the target sample. When the target sample in the second image is labeled, the training data generating device obtains the sample name, category and other information of the target sample from the scanning device as the attribute label of the target sample.
Alternatively, in specific implementation, when the target sample in the second image is labeled, a scan instruction may be sent to the scanning device connected to the image capture device, so that the scanning device scans the identification code on the target sample, recognizes attribute information of the target sample such as its sample name and category, and sends the recognized attribute information to the image capture device as the attribute label of the target sample.
Specifically, in step 1064, the second image may be segmented by green-screen segmentation and/or static background segmentation. Since segmenting an image by green-screen segmentation or static background segmentation belongs to the prior art, its specific implementation is not repeated here.
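A minimal sketch of the static-background segmentation option mentioned above: it assumes a reference image of the scene without the target sample is available (for the second image this can simply be the preset background image itself) and that a fixed difference threshold is adequate; both are illustrative choices.

# Sketch of static-background segmentation: pixels that differ sufficiently from a
# reference image of the empty scene are treated as target-sample pixels.
import cv2
import numpy as np

def segment_against_static_background(second_image, empty_scene, diff_threshold=30):
    diff = cv2.absdiff(second_image, empty_scene)
    diff_gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, sample_mask = cv2.threshold(diff_gray, diff_threshold, 255, cv2.THRESH_BINARY)
    sample_mask = cv2.morphologyEx(sample_mask, cv2.MORPH_OPEN,
                                   np.ones((5, 5), np.uint8))   # remove small noise
    return sample_mask   # non-zero where the target sample is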
Since it is inconvenient to mark pixels directly on the original second image, and doing so might affect subsequent model generation, in the embodiments of this specification, after the pixels corresponding to the target sample are determined, a new second image can be generated, and the pixels corresponding to the target sample are marked in the newly generated second image, which is denoted as the pixel label image corresponding to the target sample. It should be noted that the newly generated second image is identical to the original second image; the purpose of generating it is precisely to mark the pixels corresponding to the target sample.
Specifically, when the pixels corresponding to the target sample are marked, they may be outlined, or the pixel values of the pixels corresponding to the target sample may be set to the same value, among other ways.
Through the above process, the training data corresponding to one image of the target sample, specifically the training data for one viewing angle of the target sample, can be generated. After the labeling of the target sample is completed, the second image, the attribute label of the target sample and the pixel label image of the target sample can be stored in association as one piece of training data of the target sample.
Specifically, after one piece of training data of the target sample is obtained, it can first be stored locally on the training data generating device; then the generation of the second piece of training data of the target sample starts. After all training data corresponding to the target sample has been obtained, all of it can be uploaded to the cloud and stored there.
For example, in specific implementation, images of the target sample may be captured from different viewing angles, and the training data for each viewing angle of the target sample is generated separately. For instance, the front view of the target sample may be captured first; after the training data corresponding to the front view is obtained, it is stored locally on the training data generating device. The left view of the target sample is then captured and processed as described above to obtain the training data corresponding to the left view, which is also stored locally on the training data generating device. Image capture and processing for the right view and the remaining views then follow; after the training data corresponding to all views of the target sample has been obtained, all training data corresponding to the target sample is uploaded to the cloud for storage.
Of course, in specific implementation, the step of uploading the training data to the cloud for storage may also be performed after the training data of all target samples has been obtained.
In the embodiments of this specification, the training data is stored remotely; on the one hand, data storage is more secure, and on the other hand, it facilitates subsequent use of the training data.
It should be noted that, in specific implementation, after one piece of training data of the target sample is obtained, an adjustment instruction may be sent to the image capture device so that the image capture camera can photograph other viewing angles of the target sample through operations such as rotation and translation. Alternatively, in specific implementation, the target sample may be placed on a movable motion platform provided with a motion controller; the training data generating device sends an adjustment instruction to the motion controller, so that the motion controller controls the motion platform to rotate, translate, rise or fall according to the adjustment instruction, and the image capture device can thus photograph other viewing angles of the target sample.
For ease of understanding, an example is given below.
For example, after the front view of the target sample has been captured, a side view of the target sample needs to be captured. At this point, an instruction to rotate 90 degrees clockwise or counterclockwise can be sent to the motion controller on the motion platform, or an instruction to rotate 90 degrees clockwise or counterclockwise can be sent to the image capture device, so that the image capture device can capture the side view of the target sample.
In addition, in the embodiments of this specification, before an image of the target sample is captured, in order to capture the image of the target sample at the set viewing angle, the position of the target sample or the position of the image capture device also needs to be adjusted.
Whether before the first image of the target sample is captured, or after the first image capture has been completed and before images of other viewing angles of the target sample are captured, the target sample or the image capture device needs to be adjusted. Therefore, before step 102 is performed, the method provided by the embodiments of this specification further includes the following steps:
sending a first adjustment instruction to the image capture device, so that the image capture device rotates or translates according to the first adjustment instruction;
or
sending a second adjustment instruction to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the second adjustment instruction, where the motion platform is used to hold the target sample.
In the embodiments of this specification, by sending an adjustment instruction to the image capture device or the motion controller, the shooting angle of the target sample can be adjusted automatically, so that the target sample can be photographed from multiple viewing angles, and the user does not need to manually adjust the position of the image capture device or the target sample, which is easy to operate.
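This embodiment does not fix the format of the adjustment instruction; purely as an illustration, the sketch below encodes a hypothetical rotate/translate command as JSON and sends it to a hypothetical motion controller listening on a TCP port.

# Hypothetical sketch of sending a second adjustment instruction to the motion
# controller of the motion platform. The command schema, host and port are all
# assumptions made for illustration only.
import json
import socket

def send_adjustment_instruction(host, port, action, value):
    # action: e.g. "rotate" (degrees, positive = clockwise), "translate", "raise" or "lower"
    instruction = {"type": "second_adjustment_instruction", "action": action, "value": value}
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(json.dumps(instruction).encode("utf-8") + b"\n")

# e.g. rotate the motion platform 90 degrees clockwise before capturing the side view:
# send_adjustment_instruction("192.168.1.50", 9000, "rotate", 90)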
In addition, in specific implementation, before an adjustment instruction is sent to the image capture device or to the motion controller corresponding to the motion platform, the current position of the target sample can be detected, so as to determine whether an adjustment instruction needs to be sent to the image capture device or to the motion controller corresponding to the motion platform, and to determine the specific content of the adjustment instruction.
Specifically, when detecting the current position of the target sample, the detection can be performed through preset key points on the target sample. For example, if the target sample is a can of cola, several key points can be selected on the can; the image capture device is aimed at the can, and it is detected whether the preset key points are located at the predetermined positions on the preview shooting interface of the image capture device.
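The key-point check described above can be sketched as follows, assuming the detected key points and their expected positions on the preview shooting interface are given in pixel coordinates and that a fixed pixel tolerance decides whether an adjustment instruction is still needed; both assumptions are illustrative.

# Sketch: decide whether the target sample is at the set position by comparing
# detected key points with their expected positions on the preview interface.
# The tolerance of 10 pixels is an illustrative assumption.
def at_set_position(detected_keypoints, expected_keypoints, tolerance=10):
    for (x, y), (ex, ey) in zip(detected_keypoints, expected_keypoints):
        if abs(x - ex) > tolerance or abs(y - ey) > tolerance:
            return False   # an adjustment instruction is still needed
    return True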
In addition, in order to increase the data volume of the target sample, increase the diversity of the target sample data and improve the robustness of the trained model, before step 106 is performed, the method provided by the embodiments of this specification further includes the following step:
performing data augmentation on the second image.
Specifically, performing data augmentation on the second image means performing operations such as rotation and translation on the second image, or enlarging or shrinking the second image, applying color jitter, and so on, so as to obtain multiple second images of the target sample. In the subsequent step 106, each of the multiple second images can then be labeled, so that the training data corresponding to the target sample is diverse.
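An illustrative augmentation sketch covering the operations listed above (rotation, translation, scaling and color jitter); the particular angles, offsets, scales and jitter strength are arbitrary assumptions.

# Sketch of data augmentation on the second image: rotation, translation,
# scaling and color jitter. Parameter values are illustrative assumptions.
import cv2
import numpy as np

def augment_second_image(second_image):
    h, w = second_image.shape[:2]
    augmented = []
    # rotation and scaling about the image centre
    for angle, scale in [(15, 1.0), (-15, 1.0), (0, 0.9), (0, 1.1)]:
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        augmented.append(cv2.warpAffine(second_image, M, (w, h)))
    # translation
    for dx, dy in [(20, 0), (0, 20)]:
        M = np.float32([[1, 0, dx], [0, 1, dy]])
        augmented.append(cv2.warpAffine(second_image, M, (w, h)))
    # color jitter: small random shift of each channel
    jitter = np.random.randint(-20, 21, size=3)
    jittered = np.clip(second_image.astype(np.int16) + jitter, 0, 255).astype(np.uint8)
    augmented.append(jittered)
    return augmented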
In addition, in the embodiments of this specification, in order to make the appearance of the target sample in the captured first image close to that of the real target sample, before step 104 is performed, the method provided by the embodiments of this specification further includes the following step:
performing image preprocessing on the first image.
Specifically, performing image preprocessing on the first image may mean adjusting parameters of the first image such as its resolution, brightness and color, so that the appearance of the target object in the first image is closer to that of the real target object.
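A preprocessing sketch matching the description above (resolution, brightness and color adjustment); the target resolution and the gain/offset values are illustrative assumptions.

# Sketch of preprocessing the first image: resize to a target resolution and
# adjust brightness/color with a simple gain and offset. Values are assumptions.
import cv2

def preprocess_first_image(first_image, target_size=(640, 480), gain=1.1, offset=10):
    resized = cv2.resize(first_image, target_size)                    # resolution adjustment
    adjusted = cv2.convertScaleAbs(resized, alpha=gain, beta=offset)  # brightness/color adjustment
    return adjusted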
In addition, in the embodiments of this specification, before an image of the target sample is captured, the parameters of the image capture device, the illumination parameters, the motion parameters of the motion platform, and so on, also need to be configured; the specific settings can be chosen according to the actual application scenario.
Fig. 3 is a second method flowchart of the method for generating training data provided by an embodiment of this specification. The method shown in Fig. 3 includes at least the following steps:
Step 302: the training data generating device detects whether the target sample is located at the set position; if so, step 306 is performed, otherwise step 304 is performed.
In step 302, whether the target sample is located at the set position can be detected by checking whether the preset key points on the target sample are located at the predetermined positions on the preview shooting interface of the RGB camera.
Step 304: the training data generating device sends an adjustment instruction to the RGB camera or to the motion controller corresponding to the motion platform to adjust the shooting angle of the target sample, where the motion platform is used to hold the target sample.
Step 306: send an image capture instruction to the RGB camera.
Step 308: after receiving the image capture instruction sent by the training data generating device, the RGB camera captures the first image of the target sample and sends the captured image to the training data generating device.
Step 310: the training data generating device extracts the foreground image from the first image and composites the foreground image with the preset background image.
Step 312: the training data generating device obtains the attribute label of the target sample.
Step 314: the training data generating device performs image segmentation on the second image by green-screen segmentation and/or static background segmentation, determines the pixels corresponding to the target sample, and generates the pixel label image corresponding to the target sample.
Step 316: the training data generating device determines the second image, the attribute label and the pixel label image as the training data corresponding to the target sample, and stores them locally.
Step 318: the training data generating device uploads all training data of the target sample to the cloud for storage.
The specific implementation of each step in the embodiment corresponding to Fig. 3 is the same as that of the corresponding steps in the methods of Fig. 1 and Fig. 2; therefore, for the specific implementation of each step in the embodiment corresponding to Fig. 3, reference may be made to the embodiments corresponding to Fig. 1 and Fig. 2, and details are not repeated here.
Fig. 4 is a third method flowchart of the method for generating training data provided by an embodiment of this specification. The method shown in Fig. 4 includes at least the following steps:
Step 402: the training data generating device detects whether the target sample is located at the set position; if so, step 406 is performed, otherwise step 404 is performed.
In step 402, whether the target sample is located at the set position can be detected by checking whether the preset key points on the target sample are located at the predetermined positions on the preview shooting interfaces of the RGB camera and the depth camera.
Step 404: the training data generating device sends an adjustment instruction to the motion controller corresponding to the motion platform to adjust the shooting angle of the target sample, where the motion platform is used to hold the target sample.
Alternatively, in step 404, an adjustment instruction may also be sent to the RGB camera and the depth camera, and the shooting angle of the target sample is adjusted by adjusting the positions, angles and the like of the RGB camera and the depth camera.
Step 406: send an image capture instruction to the RGB camera and the depth camera.
Step 408: after receiving the image capture instruction sent by the training data generating device, the RGB camera captures the RGB image of the target sample and sends it to the training data generating device; after receiving the image capture instruction sent by the training data generating device, the depth camera captures the depth image of the target sample and sends it to the training data generating device.
Step 410: the training data generating device extracts the foreground image from the RGB image and generates a virtual viewpoint image according to the foreground image and the depth image.
Step 412: the training data generating device composites the virtual viewpoint image with the preset background image.
Step 414: the training data generating device obtains the attribute label of the target sample.
Step 416: the training data generating device performs image segmentation on the second image by green-screen segmentation and/or static background segmentation, determines the pixels corresponding to the target sample, and generates the pixel label image corresponding to the target sample.
Step 418: the training data generating device determines the second image, the attribute label and the pixel label image as the training data corresponding to the target sample, and stores them locally.
Step 420: the training data generating device uploads all training data of the target sample to the cloud for storage.
The specific implementation of each step in the embodiment corresponding to Fig. 4 is the same as that of the corresponding steps in the methods of Fig. 1 and Fig. 2; therefore, for the specific implementation of each step in the embodiment corresponding to Fig. 4, reference may be made to the embodiments corresponding to Fig. 1 and Fig. 2, and details are not repeated here.
With the method for generating training data provided by the embodiments of this specification, when generating training data, an image capture instruction is sent to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, in the embodiments of this specification, the target sample in the second image is labeled by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. Through the embodiments of this specification, training data is generated automatically, which improves the efficiency of data generation, reduces labor cost, and yields training data with higher accuracy.
Corresponding to the method for generating training data provided by the embodiments of this specification, and based on the same idea, an embodiment of this specification provides an apparatus for generating training data, configured to perform the method for generating training data provided by the embodiments of this specification. Fig. 5 is a schematic diagram of the modules of the apparatus for generating training data provided by an embodiment of this specification. The apparatus shown in Fig. 5 comprises:
a first sending module 501, configured to send an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
an obtaining module 502, configured to obtain the first image;
a replacement module 503, configured to replace the background image of the first image with a preset background image to obtain a second image of the target sample, where the background image is the region of the first image other than the target sample; and
a labeling module 504, configured to label the target sample in the second image by a data labeling algorithm to obtain the training data.
Optionally, the labeling module 504 comprises:
a first determining unit, configured to determine the attribute label of the target sample, where the attribute label includes at least the sample name of the target sample;
a first generating unit, configured to determine the pixels corresponding to the target sample by performing image segmentation on the second image, and generate the pixel label image corresponding to the target sample; and
a second determining unit, configured to determine the second image, the attribute label and the pixel label image as the training data.
Optionally, the apparatus provided by the embodiments of this specification further comprises:
a second sending module, configured to send a first adjustment instruction to the image capture device, so that the image capture device rotates or translates according to the first adjustment instruction;
or
a third sending module, configured to send a second adjustment instruction to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the second adjustment instruction, where the motion platform is used to hold the target sample.
Optionally, the first image is an RGB image;
the replacement module 503 comprises:
a first extracting unit, configured to extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample; and
a first compositing unit, configured to composite the foreground image with the preset background image to obtain the second image.
Optionally, the first image includes an RGB image and a depth image;
the replacement module 503 comprises:
a second extracting unit, configured to extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
a second generating unit, configured to generate a virtual viewpoint image corresponding to the foreground image according to the foreground image and the depth image; and
a second compositing unit, configured to composite the virtual viewpoint image with the preset background image to obtain the second image.
Optionally, the apparatus provided by the embodiments of this specification further comprises:
an augmentation processing module, configured to perform data augmentation on the second image.
The apparatus for generating training data of the embodiments of this specification can also perform the method performed by the apparatus for generating training data in Fig. 1 to Fig. 4 and realize the functions of the apparatus for generating training data in the embodiments shown in Fig. 1 to Fig. 4, which is not repeated here.
With the apparatus for generating training data provided by the embodiments of this specification, when generating training data, an image capture instruction is sent to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, the target sample in the second image is labeled by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. Through the embodiments of this specification, training data is generated automatically, which improves the efficiency of training data generation, reduces labor cost, and yields training data with higher accuracy.
Corresponding to the method for generating training data provided by the embodiments of this specification, and based on the same idea, an embodiment of this specification further provides a system for generating training data. Fig. 6 is a first schematic structural diagram of the system for generating training data provided by an embodiment of this specification. The system shown in Fig. 6 includes an image capture device 601 and an image processing device 602, where the image processing device 602 includes the apparatus for generating training data;
the image capture device 601 is configured to receive the image capture instruction sent by the training data generating device, and capture a first image of a target sample according to the image capture instruction;
the image processing device 602 is configured to send the image capture instruction to the image capture device, obtain the first image from the image capture device, and replace the background image of the first image with a preset background image to obtain a second image of the target sample, where the background image is the region of the first image other than the target sample; and label the target sample in the second image by a data labeling algorithm to obtain the training data.
Optionally, the image processing device 602 is specifically configured to:
determine the attribute label of the target sample, where the attribute label includes at least the sample name of the target sample; determine the pixels corresponding to the target sample by performing image segmentation on the second image, and generate the pixel label image corresponding to the target sample; and determine the second image, the attribute label and the pixel label image as the training data.
Optionally, the image processing device 602 is further configured to:
send a first adjustment instruction to the image capture device, so that the image capture device rotates or translates according to the first adjustment instruction;
or
send a second adjustment instruction to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the second adjustment instruction, where the motion platform is used to hold the target sample.
Optionally, the image processing device 602 is further configured to:
perform data augmentation on the second image.
Optionally, if the first image is an RGB image, the image processing device 602 is further specifically configured to:
extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample; and composite the foreground image with the preset background image to obtain the second image.
Optionally, if the first image includes an RGB image and a depth image, the image processing device 602 is further specifically configured to:
extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample; generate a virtual viewpoint image corresponding to the foreground image according to the foreground image and the depth image; and composite the virtual viewpoint image with the preset background image to obtain the second image.
In a specific embodiment, the system for generating training data further includes a motion platform 603, as shown in Fig. 7. When training data is generated, the target sample is placed on the motion platform 603. In addition, the motion platform 603 is connected to a motion controller 604; specifically, the motion controller 604 can be integrated on the motion platform 603, or be a device independent of the motion platform 603. The image processing device 602 controls the motion platform 603 to rotate or move by sending an adjustment instruction to the motion controller. The image processing device 602 is also connected to the image capture device 601, controls the image capture device 601 to capture images of the target sample, and controls the image capture device 601 to rotate or translate.
Of course, the image processing device 602 is connected to the motion controller 604, and the motion controller 604 is connected to the motion platform; under the control of the image processing device 602, the motion platform 603 is controlled to rotate or move.
Fig. 7 illustrates only one possible implementation of the system for generating training data; the specific form of the system is not limited to this, and the embodiments of this specification do not limit it.
With the system for generating training data provided by the embodiments of this specification, when generating training data, the image processing device sends an image capture instruction to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, in the embodiments of this specification, the target sample in the second image is labeled by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. That is, through the embodiments of this specification, training data is generated automatically, which improves the efficiency of training data generation, reduces labor cost, and yields training data with higher accuracy.
Further, based on the methods shown in Fig. 1 to Fig. 4, an embodiment of this specification further provides a device for generating training data, as shown in Fig. 8.
The device for generating training data may vary considerably in configuration or performance, and may include one or more processors 801 and a memory 802; the memory 802 may store one or more application programs or data, and may provide transient or persistent storage. An application program stored in the memory 802 may include one or more modules (not shown in the figure), and each module may include a series of computer-executable instructions for the device for generating training data. Further, the processor 801 may be configured to communicate with the memory 802 and execute the series of computer-executable instructions in the memory 802 on the device for generating training data. The device for generating training data may also include one or more power supplies 803, one or more wired or wireless network interfaces 804, one or more input/output interfaces 805, one or more keyboards 806, and so on.
In a specific embodiment, the device for generating training data includes a memory and one or more programs, where the one or more programs are stored in the memory, each program may include one or more modules, and each module may include a series of computer-executable instructions for the device for generating training data; the one or more processors are configured to execute the one or more programs, which contain computer-executable instructions for performing the following:
sending an image capture instruction to the image capture device, so that the image capture device acquires a first image of the target sample according to the instruction of the image capture instruction;
obtaining the first image, and replacing the background image of the first image with a set background image to obtain a second image of the target sample, wherein the background image is the region in the first image other than the target sample; and
labeling the target sample in the second image by a data labeling algorithm to obtain the training data.
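The three instructions above form a capture / background-replacement / labeling loop. The following minimal Python skeleton shows how they could be chained; the function names, the stubbed steps, and the on-disk layout are hypothetical and only indicate where the more concrete sketches given further below would plug in.

```python
from pathlib import Path
import numpy as np

def capture_first_image() -> np.ndarray:
    """Stand-in for the image capture instruction sent to the image capture device."""
    raise NotImplementedError("replace with the real camera interface")

def replace_background(first_image: np.ndarray,
                       set_background: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stand-in for foreground extraction + composition; returns (second image, foreground mask)."""
    raise NotImplementedError("see the RGB / RGB-D sketches further below")

def label_sample(second_image: np.ndarray, mask: np.ndarray,
                 sample_name: str, out_dir: Path, idx: int) -> None:
    """Stand-in for the data labeling step (see the annotation sketch further below)."""
    raise NotImplementedError

def generate_training_data(backgrounds: list[np.ndarray],
                           sample_name: str, out_dir: Path) -> None:
    """One pass over the set background images: capture, replace background, label."""
    for idx, background in enumerate(backgrounds):
        first_image = capture_first_image()
        second_image, mask = replace_background(first_image, background)
        label_sample(second_image, mask, sample_name, out_dir, idx)
```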
Optionally, when the computer-executable instruction information is executed, labeling the target sample in the second image by the data labeling algorithm to obtain the training data comprises:
determining attribute tags of the target sample, wherein the attribute tags include at least the sample name of the target sample;
determining, by performing image segmentation on the second image, the pixels corresponding to the target sample, and generating a pixel annotation image corresponding to the target sample; and
determining the second image, the attribute tags, and the pixel annotation image as the training data.
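Because the background of the second image was just replaced, the pixels belonging to the target sample are already known from the foreground mask, so the pixel annotation image and the attribute tags can be written out mechanically. A possible sketch, assuming the foreground mask from the composition step is available and using an illustrative JSON record layout (not prescribed by the specification):

```python
import json
from pathlib import Path

import cv2  # pip install opencv-python
import numpy as np

def build_training_record(second_image: np.ndarray,
                          foreground_mask: np.ndarray,
                          sample_name: str,
                          out_dir: Path,
                          idx: int) -> dict:
    """Bundle (second image, attribute tags, pixel annotation image) into one training sample.

    foreground_mask is a uint8 array in which non-zero pixels belong to the target
    sample; it doubles as the pixel-level annotation image.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    image_path = out_dir / f"{idx:06d}.png"
    mask_path = out_dir / f"{idx:06d}_mask.png"
    cv2.imwrite(str(image_path), second_image)
    cv2.imwrite(str(mask_path), (foreground_mask > 0).astype(np.uint8) * 255)

    # A bounding box is easy to derive from the mask and is useful for detection tasks.
    ys, xs = np.nonzero(foreground_mask)
    bbox = [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())] if xs.size else None

    record = {
        "image": image_path.name,
        "mask": mask_path.name,
        "attribute_tags": {"sample_name": sample_name},
        "bbox_xyxy": bbox,
    }
    (out_dir / f"{idx:06d}.json").write_text(json.dumps(record, ensure_ascii=False),
                                             encoding="utf-8")
    return record
```

A pixel-accurate annotation image of this kind also makes it straightforward to derive coarser labels, such as the bounding box written into the record above.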
Optionally, when the computer-executable instruction information is executed, before the image capture instruction is sent to the image capture device, the following steps may also be carried out:
sending a first regulating command to the image capture device, so that the image capture device rotates or translates according to the instruction of the first regulating command;
or
sending a second regulating command to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the instruction of the second regulating command, wherein the motion platform is used for placing the target sample.
Optionally, when the computer-executable instruction information is executed, before the target sample in the second image is labeled by the data labeling algorithm to obtain the training data, the following step may also be performed:
performing data enhancement processing on the second image.
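The data enhancement step is not further specified in this embodiment; typical photometric and geometric augmentations fit here. The sketch below is one possible choice (flip, brightness/contrast jitter, occasional blur) using OpenCV; the particular transforms and parameter ranges are our assumptions.

```python
import random

import cv2  # pip install opencv-python

def augment(image, mask):
    """Apply a few simple augmentations, keeping the image and its annotation image aligned."""
    # Horizontal flip: geometric changes must be applied to the mask as well.
    if random.random() < 0.5:
        image = cv2.flip(image, 1)
        mask = cv2.flip(mask, 1)

    # Brightness / contrast jitter: photometric change, mask untouched.
    alpha = random.uniform(0.8, 1.2)   # contrast factor
    beta = random.uniform(-20, 20)     # brightness offset
    image = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)

    # Mild Gaussian blur now and then to simulate slight defocus.
    if random.random() < 0.3:
        image = cv2.GaussianBlur(image, (5, 5), 0)

    return image, mask
```

Note that geometric transforms are applied to the second image and its pixel annotation image together, while purely photometric transforms leave the annotation unchanged.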
Optionally, when the computer-executable instruction information is executed, the first image is an RGB image;
correspondingly, replacing the background image of the first image with the set background image to obtain the second image of the target sample comprises:
extracting a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample; and
synthesizing the foreground image and the set background image to obtain the second image.
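The way the foreground image is extracted is likewise left open. If, for example, the target sample is photographed against a uniform backdrop, a simple colour key suffices; more general segmentation or matting methods could be substituted. The following sketch assumes a green backdrop and illustrative HSV thresholds:

```python
import cv2  # pip install opencv-python
import numpy as np

def extract_foreground_mask(first_image: np.ndarray) -> np.ndarray:
    """Rough foreground mask for a sample shot against a green backdrop (assumption)."""
    hsv = cv2.cvtColor(first_image, cv2.COLOR_BGR2HSV)
    backdrop = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))   # green-ish backdrop pixels
    mask = cv2.bitwise_not(backdrop)                            # everything else = foreground
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove small speckles

def compose_second_image(first_image: np.ndarray,
                         set_background: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Replace the background of the first image with the set background image."""
    mask = extract_foreground_mask(first_image)
    set_background = cv2.resize(set_background,
                                (first_image.shape[1], first_image.shape[0]))
    mask3 = cv2.merge([mask, mask, mask]).astype(bool)
    second_image = np.where(mask3, first_image, set_background)
    return second_image, mask
```

compose_second_image returns the foreground mask alongside the second image, so the same mask can be reused directly for the labeling step described above.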
Optionally, when the computer-executable instruction information is executed, the first image includes an RGB image and a depth image;
correspondingly, replacing the background image of the first image with the set background image to obtain the second image of the target sample comprises:
extracting a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample;
generating, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and
synthesizing the virtual viewpoint image and the set background image to obtain the second image.
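For the RGB-D case, a virtual viewpoint image can be synthesised from the foreground image and the depth image before composition. The sketch below is a deliberately simplified forward warp: each foreground pixel is shifted horizontally by a disparity proportional to its inverse depth, approximating a small sideways translation of the camera. The focal length and baseline are made-up constants, and a real virtual-view synthesis would additionally handle occlusion ordering and hole filling.

```python
import numpy as np

def synthesize_virtual_view(foreground: np.ndarray,
                            mask: np.ndarray,
                            depth: np.ndarray,
                            focal: float = 600.0,
                            baseline: float = 0.05) -> tuple[np.ndarray, np.ndarray]:
    """Forward-warp foreground pixels by disparity = focal * baseline / depth.

    foreground: H x W x 3 image; mask: H x W (non-zero = target sample);
    depth: H x W in metres. Returns the warped image and its warped mask.
    """
    h, w = depth.shape
    warped = np.zeros_like(foreground)
    warped_mask = np.zeros_like(mask)

    ys, xs = np.nonzero(mask)
    safe_depth = np.maximum(depth[ys, xs], 1e-3)            # avoid division by zero
    disparity = (focal * baseline / safe_depth).round().astype(int)
    new_xs = np.clip(xs + disparity, 0, w - 1)

    # Last pixel written wins; a real renderer would z-buffer by depth instead.
    warped[ys, new_xs] = foreground[ys, xs]
    warped_mask[ys, new_xs] = mask[ys, xs]
    return warped, warped_mask
```

The warped foreground and its warped mask can then be composited onto the set background image exactly as in the RGB-only sketch.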
In the generating device of training data provided by this embodiment of the specification, when training data is generated, an image capture instruction is sent to the image capture device, so that the image capture device is controlled to acquire the first image of the target sample, realizing automatic collection of target sample images. In addition, in this embodiment, the target sample in the second image is labeled by the data labeling algorithm, realizing automatic labeling of the target sample and improving the efficiency of data labeling. In other words, this embodiment automates the generation of sample training data, which improves efficiency, reduces labor cost, and yields training data of higher accuracy.
Further, based on the methods shown in Figs. 1 to 4 above, this embodiment of the specification additionally provides a storage medium for storing computer-executable instruction information. In a specific embodiment, the storage medium may be a USB flash disk, an optical disc, a hard disk, or the like, and the computer-executable instruction information stored in the storage medium, when executed by a processor, is able to realize the following flow:
sending an image capture instruction to the image capture device, so that the image capture device acquires a first image of the target sample according to the instruction of the image capture instruction;
obtaining the first image, and replacing the background image of the first image with a set background image to obtain a second image of the target sample, wherein the background image is the region in the first image other than the target sample; and
labeling the target sample in the second image by a data labeling algorithm to obtain the training data.
Optionally, when the computer-executable instruction information stored in the storage medium is executed by the processor, labeling the target sample in the second image by the data labeling algorithm to obtain the training data comprises:
determining attribute tags of the target sample, wherein the attribute tags include at least the sample name of the target sample;
determining, by performing image segmentation on the second image, the pixels corresponding to the target sample, and generating a pixel annotation image corresponding to the target sample; and
determining the second image, the attribute tags, and the pixel annotation image as the training data.
Optionally, when the computer-executable instruction information stored in the storage medium is executed by the processor, before the image capture instruction is sent to the image capture device, the following steps may also be carried out:
sending a first regulating command to the image capture device, so that the image capture device rotates or translates according to the instruction of the first regulating command;
or
sending a second regulating command to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the instruction of the second regulating command, wherein the motion platform is used for placing the target sample.
Optionally, when the computer-executable instruction information stored in the storage medium is executed by the processor, before the target sample in the second image is labeled by the data labeling algorithm to obtain the training data, the following step may also be performed:
performing data enhancement processing on the second image.
Optionally, when the computer-executable instruction information stored in the storage medium is executed by the processor, the first image is an RGB image;
correspondingly, replacing the background image of the first image with the set background image to obtain the second image of the target sample comprises:
extracting a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample; and
synthesizing the foreground image and the set background image to obtain the second image.
Optionally, when the computer-executable instruction information stored in the storage medium is executed by the processor, the first image includes an RGB image and a depth image;
correspondingly, replacing the background image of the first image with the set background image to obtain the second image of the target sample comprises:
extracting a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample;
generating, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and
synthesizing the virtual viewpoint image and the set background image to obtain the second image.
When the computer-executable instruction information stored in the storage medium provided by this embodiment of the specification is executed by the processor, an image capture instruction is sent to the image capture device during the generation of training data, so that the image capture device is controlled to acquire the first image of the target sample, realizing automatic collection of target sample images. In addition, in this embodiment, the target sample in the second image is labeled by the data labeling algorithm, realizing automatic labeling of the target sample and improving the efficiency of data labeling. In other words, this embodiment automates the generation of sample training data, which improves efficiency, reduces labor cost, and yields training data of higher accuracy.
In the 1990s, an improvement of a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, improvements of many method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is such an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must also be written in a specific programming language, referred to as a hardware description language (Hardware Description Language, HDL). There is not only one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are most commonly used at present. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can readily be obtained merely by slightly programming the method flow in logic using the above-mentioned hardware description languages and programming it into an integrated circuit.
A controller can be implemented in any suitable manner. For example, a controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, a logic gate, a switch, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely by computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the apparatuses included in it for implementing various functions can also be regarded as structures within the hardware component. Or even, the apparatuses for implementing various functions can be regarded both as software modules for implementing a method and as structures within the hardware component.
The systems, devices, modules, or units illustrated in the foregoing embodiments can be specifically implemented by a computer chip or an entity, or by a product having certain functions. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an electronic mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described as being divided into various units by function. Of course, when the present application is implemented, the functions of the units may be realized in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, a magnetic disk memory, a CD-ROM, an optical memory, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instruction information. The computer program instruction information may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instruction information executed by the processor of the computer or other programmable data processing device produces an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The computer program instruction information may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to operate in a specific manner, so that the instruction information stored in the computer-readable memory produces a manufactured article including an instruction information apparatus, and the instruction information apparatus realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The computer program instruction information may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and thus the instruction information executed on the computer or other programmable device provides steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and a memory.
The memory may include a non-volatile memory in a computer-readable medium, a random access memory (RAM), and/or a non-volatile memory or the like, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology. The information may be computer-readable instruction information, a data structure, a program module, or other data. Examples of computer storage media include, but are not limited to, a phase-change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technology, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a magnetic cassette, a magnetic tape or magnetic disk storage or other magnetic storage device, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as a modulated data signal and a carrier wave.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, commodity, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, a magnetic disk memory, a CD-ROM, an optical memory, and the like) containing computer-usable program code.
The present application may be described in the general context of computer-executable instruction information executed by a computer, such as a program module. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. The present application may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are all described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, it is described relatively simply, and for relevant parts reference may be made to the description of the method embodiment.
The above descriptions are merely embodiments of the present application and are not intended to limit the present application. For those skilled in the art, various changes and variations of the present application are possible. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (14)

1. A generation method of training data, the method comprising:
sending an image capture instruction to an image capture device, so that the image capture device acquires a first image of a target sample according to the instruction of the image capture instruction;
obtaining the first image, and replacing a background image of the first image with a set background image to obtain a second image of the target sample, wherein the background image is the region in the first image other than the target sample; and
labeling the target sample in the second image by a data labeling algorithm to obtain the training data.
2. The method according to claim 1, wherein labeling the target sample in the second image by the data labeling algorithm to obtain the training data comprises:
determining attribute tags of the target sample, wherein the attribute tags include at least a sample name of the target sample;
determining, by performing image segmentation on the second image, the pixels corresponding to the target sample, and generating a pixel annotation image corresponding to the target sample; and
determining the second image, the attribute tags, and the pixel annotation image as the training data.
3. The method according to claim 1, wherein before sending the image capture instruction to the image capture device, the method further comprises:
sending a first regulating command to the image capture device, so that the image capture device rotates or translates according to the instruction of the first regulating command;
or
sending a second regulating command to a motion controller corresponding to a motion platform, so that the motion controller controls the motion platform to rotate or move according to the instruction of the second regulating command, wherein the motion platform is used for placing the target sample.
4. The method according to claim 1, wherein before labeling the target sample in the second image by the data labeling algorithm to obtain the training data, the method further comprises:
performing data enhancement processing on the second image.
5. The method according to any one of claims 1-4, wherein the first image is an RGB image;
correspondingly, replacing the background image of the first image with the set background image to obtain the second image of the target sample comprises:
extracting a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample; and
synthesizing the foreground image and the set background image to obtain the second image.
6. The method according to any one of claims 1-4, wherein the first image includes an RGB image and a depth image;
correspondingly, replacing the background image of the first image with the set background image to obtain the second image of the target sample comprises:
extracting a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample;
generating, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and
synthesizing the virtual viewpoint image and the set background image to obtain the second image.
7. A generating apparatus of training data, the apparatus comprising:
a first sending module, configured to send an image capture instruction to an image capture device, so that the image capture device acquires a first image of a target sample according to the instruction of the image capture instruction;
an obtaining module, configured to obtain the first image;
a replacement module, configured to replace a background image of the first image with a set background image to obtain a second image of the target sample, wherein the background image is the region in the first image other than the target sample; and
a labeling module, configured to label the target sample in the second image by a data labeling algorithm to obtain the training data.
8. The apparatus according to claim 7, wherein the labeling module comprises:
a first determination unit, configured to determine attribute tags of the target sample, wherein the attribute tags include at least a sample name of the target sample;
a first generation unit, configured to determine, by performing image segmentation on the second image, the pixels corresponding to the target sample, and to generate a pixel annotation image corresponding to the target sample; and
a second determination unit, configured to determine the second image, the attribute tags, and the pixel annotation image as the training data.
9. The apparatus according to claim 7, further comprising:
a second sending module, configured to send a first regulating command to the image capture device, so that the image capture device rotates or translates according to the instruction of the first regulating command;
or
a third sending module, configured to send a second regulating command to a motion controller corresponding to a motion platform, so that the motion controller controls the motion platform to rotate or move according to the instruction of the second regulating command, wherein the motion platform is used for placing the target sample.
10. The apparatus according to any one of claims 7-9, wherein the first image is an RGB image, and the replacement module comprises:
a first extraction unit, configured to extract a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample; and
a first synthesis unit, configured to synthesize the foreground image and the set background image to obtain the second image.
11. The apparatus according to any one of claims 7-9, wherein the first image includes an RGB image and a depth image, and the replacement module comprises:
a second extraction unit, configured to extract a foreground image of the first image, wherein the foreground image is the region corresponding to the target sample;
a second generation unit, configured to generate, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and
a second synthesis unit, configured to synthesize the virtual viewpoint image and the set background image to obtain the second image.
12. A generation system of training data, comprising an image capture device and an image processing device, wherein the image processing device includes the generating apparatus of the training data;
the image capture device is configured to receive an image capture instruction sent by the generating apparatus of the training data, and to acquire a first image of a target sample according to the instruction of the image capture instruction; and
the image processing device is configured to send the image capture instruction to the image capture device, and is further configured to obtain the first image from the image capture device, to replace a background image of the first image with a set background image to obtain a second image of the target sample, wherein the background image is the region in the first image other than the target sample, and to label the target sample in the second image by a data labeling algorithm to obtain the training data.
13. A generating device of training data, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to:
send an image capture instruction to an image capture device, so that the image capture device acquires a first image of a target sample according to the instruction of the image capture instruction;
obtain the first image, and replace a background image of the first image with a set background image to obtain a second image of the target sample, wherein the background image is the region in the first image other than the target sample; and
label the target sample in the second image by a data labeling algorithm to obtain training data.
14. A storage medium for storing computer-executable instructions which, when executed, realize the following flow:
sending an image capture instruction to an image capture device, so that the image capture device acquires a first image of a target sample according to the instruction of the image capture instruction;
obtaining the first image, and replacing a background image of the first image with a set background image to obtain a second image of the target sample, wherein the background image is the region in the first image other than the target sample; and
labeling the target sample in the second image by a data labeling algorithm to obtain training data.
CN201811260697.0A 2018-10-26 2018-10-26 Training data generation method, device and system Active CN109614983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811260697.0A CN109614983B (en) 2018-10-26 2018-10-26 Training data generation method, device and system

Publications (2)

Publication Number Publication Date
CN109614983A true CN109614983A (en) 2019-04-12
CN109614983B CN109614983B (en) 2023-06-16

Family

ID=66002359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811260697.0A Active CN109614983B (en) 2018-10-26 2018-10-26 Training data generation method, device and system

Country Status (1)

Country Link
CN (1) CN109614983B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024421A (en) * 2013-01-18 2013-04-03 山东大学 Method for synthesizing virtual viewpoints in free viewpoint television
US20160027187A1 (en) * 2014-07-23 2016-01-28 Xiaomi Inc. Techniques for image segmentation
US20170262735A1 (en) * 2016-03-11 2017-09-14 Kabushiki Kaisha Toshiba Training constrained deconvolutional networks for road scene semantic segmentation
CN106162137A (en) * 2016-06-30 2016-11-23 北京大学 Virtual visual point synthesizing method and device
CN108010034A (en) * 2016-11-02 2018-05-08 广州图普网络科技有限公司 Commodity image dividing method and device
CN108388833A (en) * 2018-01-15 2018-08-10 阿里巴巴集团控股有限公司 A kind of image-recognizing method, device and equipment
CN108492343A (en) * 2018-03-28 2018-09-04 东北大学 A kind of image combining method for the training data expanding target identification
CN108520223A (en) * 2018-04-02 2018-09-11 广州华多网络科技有限公司 Dividing method, segmenting device, storage medium and the terminal device of video image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEONG_KG: "[Digital Image Processing Series 4] Summary and Implementation of Image Dataset Augmentation Methods", HTTPS://BLOG.CSDN.NET/FEILONG_CSDN/ARTICLE/DETAILS/82813382 *
Full text: "Data Augmentation for Bounding Boxes: Rethinking Image Transforms for Object Detection", HTTPS://BLOG.PAPERSPACE.COM/DATA-AUGMENTATION-FOR-BOUNDING-BOXES/ *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070540B (en) * 2019-04-28 2023-01-10 腾讯科技(深圳)有限公司 Image generation method and device, computer equipment and storage medium
CN110070540A (en) * 2019-04-28 2019-07-30 腾讯科技(深圳)有限公司 Image generating method, device, computer equipment and storage medium
CN110060265A (en) * 2019-05-15 2019-07-26 北京艺泉科技有限公司 A method of divide from painting and calligraphy cultural relic images and extracts seal
CN110287850A (en) * 2019-06-20 2019-09-27 北京三快在线科技有限公司 A kind of model training and the method and device of object identification
CN110866504A (en) * 2019-11-20 2020-03-06 北京百度网讯科技有限公司 Method, device and equipment for acquiring marked data
CN110866504B (en) * 2019-11-20 2023-10-17 北京百度网讯科技有限公司 Method, device and equipment for acquiring annotation data
CN111383267A (en) * 2020-03-03 2020-07-07 重庆金山医疗技术研究院有限公司 Target relocation method, device and storage medium
CN111383267B (en) * 2020-03-03 2024-04-05 重庆金山医疗技术研究院有限公司 Target repositioning method, device and storage medium
CN111402334A (en) * 2020-03-16 2020-07-10 深圳前海达闼云端智能科技有限公司 Data generation method and device and computer readable storage medium
CN111402334B (en) * 2020-03-16 2024-04-02 达闼机器人股份有限公司 Data generation method, device and computer readable storage medium
CN111783874A (en) * 2020-06-29 2020-10-16 联想(北京)有限公司 Sample generation method, sample training method and device
CN112802049A (en) * 2021-03-04 2021-05-14 山东大学 Method and system for constructing household article detection data set
CN112802049B (en) * 2021-03-04 2022-10-11 山东大学 Method and system for constructing household article detection data set
CN113012176A (en) * 2021-03-17 2021-06-22 北京百度网讯科技有限公司 Sample image processing method and device, electronic equipment and storage medium
CN113012176B (en) * 2021-03-17 2023-12-15 阿波罗智联(北京)科技有限公司 Sample image processing method and device, electronic equipment and storage medium
CN113570534A (en) * 2021-07-30 2021-10-29 山东大学 Article identification data set expansion method and system for deep learning
CN113688887A (en) * 2021-08-13 2021-11-23 百度在线网络技术(北京)有限公司 Training and image recognition method and device of image recognition model
CN115861739A (en) * 2023-02-08 2023-03-28 海纳云物联科技有限公司 Training method, device, equipment, storage medium and product of image segmentation model

Also Published As

Publication number Publication date
CN109614983B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
CN109614983A (en) The generation method of training data, apparatus and system
Dash et al. Designing of marker-based augmented reality learning environment for kids using convolutional neural network architecture
CN105493078B (en) Colored sketches picture search
US11367259B2 (en) Method for simulating natural perception in virtual and augmented reality scenes
CN104898832B (en) Intelligent terminal-based 3D real-time glasses try-on method
US20200410211A1 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
CN108107571A (en) Image processing apparatus and method and non-transitory computer readable recording medium
CN111638784B (en) Facial expression interaction method, interaction device and computer storage medium
KR101923177B1 (en) Appratus and method for providing augmented reality information based on user
WO2024001095A1 (en) Facial expression recognition method, terminal device and storage medium
US20220101578A1 (en) Generating composite images with objects from different times
CN107808372B (en) Image crossing processing method and device, computing equipment and computer storage medium
CN109815854A (en) It is a kind of for the method and apparatus of the related information of icon to be presented on a user device
JP7310123B2 (en) Imaging device and program
Kim et al. PicMe: interactive visual guidance for taking requested photo composition
CN107743263B (en) Video data real-time processing method and device and computing equipment
CN110751668B (en) Image processing method, device, terminal, electronic equipment and readable storage medium
CN111783881A (en) Scene adaptation learning method and system based on pre-training model
CN110177216A (en) Image processing method, device, mobile terminal and storage medium
CN110012226A (en) A kind of electronic equipment and its image processing method
US20230353701A1 (en) Removing objects at image capture time
WO2023283894A1 (en) Image processing method and device
WO2023039327A1 (en) Display of digital media content on physical surface
CN111223192B (en) Image processing method, application method, device and equipment thereof
CN108109158B (en) Video crossing processing method and device based on self-adaptive threshold segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: P.O. Box 847, Fourth Floor, Capital Building, Grand Cayman, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant