Detailed Description of Embodiments
To help those skilled in the art better understand the technical solutions in this application, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the accompanying drawings of the embodiments of this specification. Evidently, the described embodiments are merely some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
The embodiments of this specification provide a generation method, apparatus, system, and storage medium for training data, which enable automatic collection and automatic labeling of training data and thereby realize automatic generation of training data. Both the generation efficiency of training data and the accuracy of the generated training data can be improved.
One specific application field of the method provided by the embodiments of this specification is unmanned retail. In the unmanned retail field, to complete a commodity transaction, the commodities purchased by a user need to be identified. Specifically, commodity identification can be implemented using an image recognition model, which therefore must be trained in advance; and training the image recognition model requires generating training data.
Fig. 1 is a first flowchart of the training data generation method provided by the embodiments of this specification. The method shown in Fig. 1 includes at least the following steps:
Step 102: Send an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the instruction of the image capture instruction.
The execution subject of the method provided by the embodiments of this specification is a generation device of training data, such as a computer, a mobile phone, a tablet computer, or another device with image processing capability; specifically, it may be a generation apparatus of training data installed in the generation device of training data.
Specifically, a communication connection is established between the generation device of training data and the image capture device. When an image of the target sample needs to be captured, the generation device of training data sends an image capture instruction to the image capture device. After receiving the image capture instruction, the image capture device captures an image of the target sample according to the instruction of the image capture instruction, which is denoted as the first image.
In a specific implementation, the above image capture instruction may be triggered by a user through the generation device of training data, or the generation device may automatically send the image capture instruction to the image capture device at a set frequency. This specification does not limit the triggering manner of the above image capture instruction.
In the embodiments of this specification, the above target sample may be any object, such as a model, an animal or plant specimen, or a commodity for sale. The specific content represented by the above target sample can be determined according to the actual application scenario; the embodiments of this specification do not limit the specific category, name, or other content of the target sample.
Specifically, for one target sample, images of that sample may in some cases need to be captured from multiple viewing angles. Therefore, in a specific implementation, each time the image capture device has captured one first image of the target sample, the captured first image may be sent to the generation device of training data, so that the generation device generates the training data for that first image; of course, in other specific implementations, after all images of a target sample have been captured, the generation device of training data may then generate the training data for the target sample.
Step 104: Obtain the first image, and replace the background image of the first image with a set background image to obtain a second image of the target sample; where the background image is the region of the first image other than the target sample.
In step 104, the first image sent by the image capture device is received. When the first image of the target sample is captured, the captured content includes both the target sample and the background in which the target sample is currently located; therefore, the region of the first image corresponding to the target sample may be denoted as the foreground image, and the region corresponding to the background may be denoted as the background image.
In practical applications, the target sample may be placed in different scenes, in which case the background of the target sample changes. Therefore, to further improve the recognition accuracy of the trained image recognition model, in the embodiments of this specification, after the first image of the target sample is obtained, the step of replacing the background image of the first image with the set background image needs to be performed.
The above set background image is the background of the target sample in the actual application scenario. For example, suppose the above target sample is a commodity to be sold in a vending machine. During commodity identification, the background of the captured commodity is the background at the commodity placement location in the vending machine. Therefore, to further improve the accuracy of the training data and thus the discrimination capability of the trained image recognition model, the background image in the captured first image of the commodity can be replaced with the image of the background corresponding to the commodity in the vending machine.
Step 106: Label the target sample in the second image by a data labeling algorithm to obtain training data.
In the above step 106, labeling the target sample in the second image is, in practice, labeling the position of the target sample in the second image and attribute information such as the name and category of the target sample. Of course, other information of the target sample may also be labeled in addition; the embodiments of this specification do not limit this.
Specifically, in step 106, labeling the position of the target sample in the second image may mean marking out the pixels corresponding to the target sample, so that the target sample is labeled with higher precision.
In the embodiments of this specification, the attribute information of the target sample, such as its name and category, together with the labeled pixel image corresponding to the target sample, serves as the label data of the target sample; the second image and the corresponding label data serve as the training data.
In the embodiments of this specification, by sending an image capture instruction to the image capture device, the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, in the embodiments of this specification, the target sample in the second image is labeled by a data labeling algorithm, realizing automatic labeling of the target sample and improving the efficiency of data labeling. Training data is thus generated automatically, which improves the generation efficiency of training data, reduces labor costs, and yields generated training data with higher accuracy.
To facilitate understanding of the training data generation method provided by the embodiments of this specification, the specific implementation processes of the above steps are discussed in detail below, one by one.
In the embodiments of this specification, the above image capture device may be an RGB (Red, Green, Blue) camera; correspondingly, the captured first image is then an RGB image. Alternatively, in another embodiment, in order to obtain stereoscopic information of the target sample and add a depth dimension to the training data, the above image capture device may include, in addition to the RGB camera, a depth camera for capturing a depth image of the target sample; in this case, the captured first image of the target sample includes both an RGB image and a depth image. For these two different situations, the specific implementation process of replacing the background image of the first image with the set background image in step 104 differs; the specific implementation processes of the above step 104 are therefore discussed separately below.
The first situation:
The above image capture device includes only an RGB camera; correspondingly, the above first image is an RGB image.
In this situation, in step 104, replacing the background image of the first image with the set background image to obtain the second image of the target sample includes the following step (1) and step (2):
Step (1): Extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
Step (2): Synthesize the above foreground image with the set background image to obtain the second image.
In the embodiments of this specification, the foreground image in the first image can be extracted by means of a mask, as follows:
Perform grayscale conversion and binarization on the first image, and extract the contour of the target sample in the first image. Create a new mask image of the same size as the first image; for example, if the size of the first image is 640*480, the size of the created mask image is also 640*480, where 640 and 480 are numbers of pixels. Initialize the pixel value of every pixel in the new mask image to 0, at which point the mask image is an entirely black image.
In the mask image, use the extracted contour of the target sample to delineate a region of interest, and set the pixel values of all pixels within the region of interest to 255, so that the region of interest becomes a white area. That is, on the mask image, the pixel values within the region of interest are nonzero, while the pixel values outside the region of interest are 0. Perform a bitwise AND between each pixel of the mask image and the corresponding pixel of the first image; the AND result outside the region of interest is zero, so the resulting image retains only the target sample region (the region of interest), and the pixel values everywhere else are zero, i.e., black.
Finally, the target sample region is cut directly out of the resulting image, and the extracted target region is used as the foreground image of the first image.
Of course, the above describes only one specific implementation process of extracting the foreground image from the first image; the foreground image may also be extracted from the first image in other ways, which the embodiments of this specification will not enumerate.
It should be noted that, in the above step (2), synthesizing the foreground image with the set background image may be implemented by determining the overlay region for the foreground image in the set background image, and then overlaying the foreground image onto that overlay region of the set background image.
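The overlay step of step (2) can be sketched as follows, again as an assumption-laden illustration rather than the mandated method: the overlay region is given by a top-left corner, and zero-valued pixels of the extracted foreground (the black pixels left by the mask) are treated as transparent.

```python
import numpy as np

def composite(foreground: np.ndarray, background: np.ndarray,
              top: int, left: int) -> np.ndarray:
    """Overlay the extracted foreground onto the set background image at the
    overlay region whose top-left corner is (top, left)."""
    second = background.copy()
    h, w = foreground.shape[:2]
    region = second[top:top + h, left:left + w]
    # Only nonzero foreground pixels (the target sample itself) replace the
    # background inside the overlay region.
    nonzero = foreground.any(axis=2)
    region[nonzero] = foreground[nonzero]
    return second
```

Choosing the overlay region (here just a corner coordinate) is exactly the "determine the overlay region" step named in the text.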
The second situation:
The above image capture device includes an RGB camera and a depth camera; the RGB camera captures an RGB image of the target sample, and the depth camera captures a depth image of the target sample. Therefore, the first image of the target sample includes an RGB image and a depth image.
In this situation, in step 104, replacing the background image of the first image with the set background image to obtain the second image of the target sample specifically includes the following step (one), step (two), and step (three):
Step (one): Extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
Step (two): Generate a virtual viewpoint image corresponding to the foreground image according to the above foreground image and the depth image;
Step (three): Synthesize the above virtual viewpoint image with the set background image to obtain the second image.
The specific implementation process of the above step (one) can refer to the specific implementation process of step (1) in the first situation, and details are not repeated here.
In the above step (two), generating the virtual viewpoint image corresponding to the foreground image according to the foreground image and the depth image specifically includes the following process:
Project the foreground image according to the depth image into the world coordinate system, obtaining the projection coordinates of the foreground image in the world coordinate system; then, from the projection coordinates of the foreground image in the world coordinate system, project into the virtual image plane, thereby obtaining the virtual viewpoint image of the foreground image in the virtual projection plane.
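The back-project/re-project process of step (two) can be sketched as below. The text does not fix a camera model, so this sketch assumes a standard pinhole model with intrinsic matrix `K` and a virtual camera displaced by translation `t` — both assumptions for illustration. Real implementations would also handle occlusion and fill holes left by the forward warp.

```python
import numpy as np

def virtual_viewpoint(foreground: np.ndarray, depth: np.ndarray,
                      K: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Warp the foreground into a virtual viewpoint using per-pixel depth."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    out = np.zeros_like(foreground)
    vs, us = np.nonzero(depth)            # only pixels with valid depth
    zs = depth[vs, us].astype(np.float64)
    # Projection of the foreground into the world coordinate system.
    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    # Virtual camera displaced by t; re-project into its image plane.
    xs2, ys2, zs2 = xs - t[0], ys - t[1], zs - t[2]
    u2 = np.round(fx * xs2 / zs2 + cx).astype(int)
    v2 = np.round(fy * ys2 / zs2 + cy).astype(int)
    ok = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    out[v2[ok], u2[ok]] = foreground[vs[ok], us[ok]]
    return out
```

With `t` equal to zero the virtual camera coincides with the real one and the foreground is reproduced unchanged, which is a convenient sanity check.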
Of course, the manner of generating a virtual viewpoint image from a planar image (the foreground image) and a depth image is not limited to this; it can also be realized in other ways, which the embodiments of this specification will not enumerate.
In the embodiments of this specification, by capturing both the depth image and the RGB image of the target sample, more multi-dimensional information about the target sample can be obtained, so that the generated training data carries more data information.
Specifically, as shown in Fig. 2, in the above step 106, labeling the target sample in the second image by the data labeling algorithm to obtain the training data specifically includes the following steps:
Step 1062: Determine the attribute label of the target sample, where the attribute label includes at least the sample name of the target sample;
Step 1064: Determine the pixels corresponding to the target sample by performing image segmentation on the second image, and generate a pixel label image corresponding to the target sample;
Step 1066: Determine the second image, the attribute label, and the pixel label image as the training data.
It should be noted that, in the embodiments of this specification, the attribute label of the target sample refers to the various attribute information of the target sample, for example, the category of the target sample, the name of the target sample, the price of the target sample, the place of production of the target sample, the production date of the target sample, and the like.
In the above step 1062, the attribute label of the target sample can be determined in at least the following two ways:
First: Receive information such as the sample name and sample category of the target sample input by the user through the generation device of training data, and use the sample information input by the user as the attribute label of the target sample.
In a specific implementation, when the generation device of training data performs the step of labeling the target sample in the second image, an attribute label input box for the target sample can be displayed on the screen of the generation device of training data, so that the user inputs the attribute information of the target sample, and the generation device of training data receives the relevant attribute information of the target sample input by the user. Alternatively, the generation device of training data may send the attribute label input box for the target sample to the user's terminal device, so that the user inputs the attribute information of the target sample through the terminal device, and the generation device of training data receives the attribute information of the target sample sent by the user through the terminal device.
Second: While controlling the image capture device to capture the first image of the target sample, a scanning device may also be controlled to scan an identification code on the target sample, such as a barcode, to identify information such as the sample name and category of the target sample. When the target sample in the second image is labeled, the generation device of training data obtains information such as the sample name and category of the target sample from the scanning device, as the attribute label of the target sample.
Alternatively, in a specific implementation, when the target sample in the second image is to be labeled, a scan instruction may be sent to the scanning device connected to the image capture device, so that the scanning device scans the identification code on the target sample, identifies attribute information such as the sample name and category of the target sample, and sends the identified attribute information to the image capture device, as the attribute label of the target sample.
Specifically, in the above step 1064, the second image can be segmented by means of green screen segmentation and/or static background segmentation; since segmenting an image by green screen segmentation or static background segmentation belongs to the prior art, its specific implementation process is not repeated here.
Since it is inconvenient to perform pixel labeling directly on the original second image, and doing so may affect subsequent models, in the embodiments of this specification, after the pixels corresponding to the target sample are determined, a new second image can be generated, and the pixels corresponding to the target sample are marked out in the newly generated second image, which is denoted as the pixel label image corresponding to the target sample. It should be noted that the newly generated second image is identical to the original second image; the sole purpose of generating it is to mark the pixels corresponding to the target sample.
Specifically, when marking the pixels corresponding to the target sample, the pixels corresponding to the target sample can be circled out, or the pixel values of the pixels corresponding to the target sample can be set to one identical value, among other ways.
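The second of those marking options — setting the target pixels to one identical value in a fresh image — can be sketched directly. The boolean segmentation mask comes from step 1064; the particular label value used is an assumption:

```python
import numpy as np

def pixel_label_image(target_pixels: np.ndarray, label_value: int = 1) -> np.ndarray:
    """Build the pixel label image: same size as the second image, with every
    pixel belonging to the target sample set to one identical value and all
    other pixels set to 0. `target_pixels` is a boolean segmentation mask."""
    label = np.zeros(target_pixels.shape, dtype=np.uint8)
    label[target_pixels] = label_value
    return label
```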
Through the above process, the training data corresponding to one image of the target sample can be generated; specifically, it may be the training data of one viewing angle of the target sample. After the labeling of the target sample is completed, the second image, the attribute label of the target sample, and the pixel label image of the target sample can be stored in correspondence, as one piece of training data of the target sample.
Specifically, after one piece of training data of the target sample is obtained, the training data can first be stored locally on the generation device of training data; then generation of the second piece of training data of the target sample starts. After all training data corresponding to the target sample has been obtained, all the training data corresponding to the target sample can be uploaded to the cloud and stored there.
For example, in a specific implementation, images of the target sample can be captured from different viewing angles, and the training data of each viewing angle of the target sample can be generated respectively. Suppose the front view of the target sample is captured first; after the training data corresponding to the front view is obtained, the training data corresponding to the front view of the target sample is stored locally on the generation device of training data. Next, the left side view of the target sample is captured and the above image processing is performed to obtain the training data corresponding to the left side view, which is likewise stored locally on the generation device of training data. Image capture and processing then continue with the right side view, and so on. After the training data corresponding to all views of the target sample has been obtained, all the training data corresponding to the target sample is uploaded to the cloud for storage.
Of course, in a specific implementation, the step of uploading the training data to the cloud for storage may instead be performed after the training data of all target samples has been obtained.
In the embodiments of this specification, storing the training data remotely on the one hand makes data storage more secure, and on the other hand facilitates subsequent use of the training data.
It should be noted that, in a specific implementation, after one piece of training data of the target sample is obtained, a regulating instruction can be sent to the image capture device so that, through operations such as rotation and translation, the image capture device can shoot other viewing angles of the target sample. Alternatively, in a specific implementation, the target sample can be placed on a movable motion platform provided with a motion controller; the generation device of training data sends a regulating instruction to the motion controller, so that the motion controller controls the motion platform to rotate, translate, rise, or descend according to the instruction of the regulating instruction, enabling the image capture device to shoot other viewing angles of the target sample.
For ease of understanding, an example is given below.
For example, suppose that after the front view of the target sample has been captured, a side view of the target sample needs to be captured. At this point, an instruction to rotate 90 degrees clockwise or counterclockwise can be sent to the motion controller on the motion platform; alternatively, an instruction to rotate 90 degrees clockwise or counterclockwise can be sent to the image capture device, so that the image capture device can capture the side view of the target sample.
In addition, in the embodiments of this specification, before an image of the target sample is captured, in order that an image of the target sample at the set viewing angle can be collected, the position of the target sample or of the image capture device also needs to be adjusted. Whether capturing the first image of the target sample, or capturing images of other viewing angles of the target sample after the first image capture has been completed, the target sample or the image capture device needs to be adjusted. Therefore, before the above step 102 is performed, the method provided by the embodiments of this specification further includes the following steps:
Send a first regulating instruction to the image capture device, so that the image capture device rotates or translates according to the instruction of the first regulating instruction;
alternatively,
send a second regulating instruction to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the instruction of the second regulating instruction; where the above motion platform is used for placing the target sample.
In the embodiments of this specification, by sending a regulating instruction to the image capture device or the motion controller, automatic adjustment of the shooting angle of the target sample can be realized, so that multi-angle shooting of the target sample can be achieved; moreover, this requires no manual adjustment of the position of the image capture device or the target sample by the user, making the operation simple.
In addition, in a specific implementation, before the regulating instruction is sent to the image capture device or to the motion controller corresponding to the motion platform, the current position of the target sample can be detected, so as to determine whether a regulating instruction needs to be sent to the image capture device or to the motion controller corresponding to the motion platform, and to determine the specific content of the regulating instruction.
Specifically, when the current position of the target sample is detected, the position of the target sample can be detected by detecting preset key points on the target sample. For example, suppose the target sample is a can of cola; several key points can be selected on the cola can, the image capture device is aimed at the cola can, and it is detected whether the preset key points are located at predetermined positions on the preview shooting interface of the image capture device.
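The key-point check just described reduces to comparing detected key-point coordinates on the preview shooting interface against their predetermined positions. A minimal sketch, with the pixel tolerance as an assumption (key-point detection itself is outside its scope):

```python
def sample_in_position(detected, expected, tolerance=5):
    """Return True when every detected key point (u, v) lies within
    `tolerance` pixels of its predetermined position (eu, ev); otherwise
    the target sample is out of position and a regulating instruction
    should be sent."""
    for (u, v), (eu, ev) in zip(detected, expected):
        if abs(u - eu) > tolerance or abs(v - ev) > tolerance:
            return False
    return True
```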
In addition, in order to increase the data volume of the target sample, increase the diversity of the target sample data, and improve the robustness of the trained model, before the above step 106 is performed, the method provided by the embodiments of this specification further includes the following step:
Perform data enhancement processing on the second image.
Specifically, performing data enhancement processing on the second image means, in practice, performing operations such as rotation and translation on the second image, or operations such as enlarging, shrinking, and color jittering on the second image, so as to obtain multiple second images of the target sample. In the subsequent step 106, the multiple second images can then each be labeled, so that the training data corresponding to the target sample is diverse.
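A few of the enhancement operations named above can be sketched in plain NumPy. This is illustrative only: rotation is restricted to 90-degree steps (arbitrary angles need resampling), scaling is omitted to keep the sketch dependency-free, and the shift and jitter magnitudes are assumptions.

```python
import numpy as np

def augment(second_image: np.ndarray, rng: np.random.Generator):
    """Produce several augmented copies of the second image: rotations,
    one translation, and one color-jittered copy."""
    augmented = []
    # Rotation: 90-degree steps via np.rot90.
    for k in (1, 2, 3):
        augmented.append(np.rot90(second_image, k))
    # Translation: shift down by two pixels, vacated rows filled with zeros.
    shifted = np.zeros_like(second_image)
    shifted[2:, :] = second_image[:-2, :]
    augmented.append(shifted)
    # Color jitter: add small random noise to the pixel values.
    noise = rng.integers(-10, 11, size=second_image.shape)
    jittered = np.clip(second_image.astype(int) + noise, 0, 255).astype(np.uint8)
    augmented.append(jittered)
    return augmented
```

Each copy would then be labeled in step 106 like the original second image, multiplying the training data per viewing angle.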
In addition, in the embodiments of this specification, in order that the appearance of the target sample in the captured first image is closer to that of the real target sample, before the above step 104 is performed, the method provided by the embodiments of this specification further includes the following step:
Perform image preprocessing on the first image.
Specifically, performing image preprocessing on the first image may specifically be adjusting parameters such as the resolution, brightness, and color of the first image, so that the appearance of the target sample in the first image is closer to that of the real target sample.
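The brightness/color part of that preprocessing can be sketched as a simple per-pixel gain and offset; this is one common adjustment, not the method the text prescribes, and resolution adjustment (which requires resampling) is omitted.

```python
import numpy as np

def preprocess(first_image: np.ndarray, brightness: float = 1.0,
               offset: int = 0) -> np.ndarray:
    """Adjust the brightness of the first image with a multiplicative gain
    and additive offset, clipping back to the valid 8-bit pixel range."""
    adjusted = first_image.astype(np.float64) * brightness + offset
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```

Per-channel gains would extend the same idea to color adjustment.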
In addition, in the embodiments of this specification, before images of the target sample are captured, the parameters of the image capture device, the illumination parameters, the motion parameters of the motion platform, and the like also need to be configured; these can be set according to the actual application scenario.
Fig. 3 is a second flowchart of the training data generation method provided by the embodiments of this specification. The method shown in Fig. 3 includes at least the following steps:
Step 302: The generation device of training data detects whether the target sample is located at the set position; if yes, step 306 is executed; otherwise, step 304 is executed.
In the above step 302, whether the target sample is located at the set position can be detected by detecting whether preset key points on the target sample are located at predetermined positions on the preview shooting interface of the RGB camera.
Step 304: The generation device of training data sends a regulating instruction to the RGB camera or to the motion controller corresponding to the motion platform, so as to adjust the shooting angle of the target sample; where the motion platform is used for placing the target sample.
Step 306: Send an image capture instruction to the RGB camera.
Step 308: After receiving the image capture instruction sent by the generation device of training data, the RGB camera captures the first image of the target sample and sends the captured image to the generation device of training data.
Step 310: The generation device of training data extracts the foreground image in the first image and synthesizes the foreground image with the set background image.
Step 312: The generation device of training data obtains the attribute label of the target sample.
Step 314: The generation device of training data performs image segmentation on the second image by means of green screen segmentation and/or static background segmentation, determines the pixels corresponding to the target sample, and generates the pixel label image corresponding to the target sample.
Step 316: The generation device of training data determines the second image, the attribute label, and the pixel label image as the training data corresponding to the target sample, and stores it locally.
Step 318: The generation device of training data uploads all the training data of the target sample to the cloud for storage.
Specifically, the specific implementation process of each step in the embodiment corresponding to Fig. 3 is the same as the implementation process of the corresponding steps in the methods corresponding to Fig. 1 and Fig. 2; therefore, for the specific implementation process of each step in the embodiment corresponding to Fig. 3, refer to the embodiments corresponding to Fig. 1 and Fig. 2, and details are not repeated here.
Fig. 4 is a third flowchart of the training data generation method provided by the embodiments of this specification. The method shown in Fig. 4 includes at least the following steps:
Step 402: The generation device of training data detects whether the target sample is located at the set position; if yes, step 406 is executed; otherwise, step 404 is executed.
In the above step 402, whether the target sample is located at the set position can be detected by detecting whether preset key points on the target sample are located at predetermined positions on the preview shooting interfaces of the RGB camera and the depth camera.
Step 404: The generation device of training data sends a regulating instruction to the motion controller corresponding to the motion platform, so as to adjust the shooting angle of the target sample; where the motion platform is used for placing the target sample.
Alternatively, in step 404, a regulating instruction may also be sent to the RGB camera and the depth camera, so that the adjustment of the shooting angle of the target sample is realized by adjusting the positions, angles, and the like of the RGB camera and the depth camera.
Step 406: Send an image capture instruction to the RGB camera and the depth camera.
Step 408: After receiving the image capture instruction sent by the generation device of training data, the RGB camera captures the RGB image of the target sample and sends the captured RGB image to the generation device of training data; after receiving the image capture instruction sent by the generation device of training data, the depth camera captures the depth image of the target sample and sends the captured depth image to the generation device of training data.
Step 410: The generation device of training data extracts the foreground image in the RGB image and generates a virtual viewpoint image according to the foreground image and the depth image.
Step 412: The generation device of training data synthesizes the virtual viewpoint image with the set background image.
Step 414: The generation device of training data obtains the attribute label of the target sample.
Step 416: The generation device of training data performs image segmentation on the second image by means of green screen segmentation and/or static background segmentation, determines the pixels corresponding to the target sample, and generates the pixel label image corresponding to the target sample.
Step 418: The generation device of training data determines the second image, the attribute label, and the pixel label image as the training data corresponding to the target sample, and stores it locally.
Step 420: The generation device of training data uploads all the training data of the target sample to the cloud for storage.
Specifically, the specific implementation process of each step in the embodiment corresponding to Fig. 4 is the same as the implementation process of the corresponding steps in the methods corresponding to Fig. 1 and Fig. 2; therefore, for the specific implementation process of each step in the embodiment corresponding to Fig. 4, refer to the embodiments corresponding to Fig. 1 and Fig. 2, and details are not repeated here.
In the training data generation method provided by the embodiments of this specification, when training data is generated, an image capture instruction is sent to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, in the embodiments of this specification, the target sample in the second image is annotated by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. The embodiments of this specification thus realize automatic generation of training data, improving generation efficiency, reducing labor cost, and yielding more accurate training data.
Corresponding to the training data generation method provided by the embodiments of this specification, and based on the same idea, the embodiments of this specification further provide a training data generating apparatus for executing that method. Fig. 5 is a schematic diagram of the modules of the training data generating apparatus provided by the embodiments of this specification. The apparatus shown in Fig. 5 comprises:
a first sending module 501, configured to send an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
an obtaining module 502, configured to obtain the first image;
a replacement module 503, configured to replace the background image of the first image with a setting background image to obtain a second image of the target sample, where the background image is the region of the first image other than the target sample; and
a labeling module 504, configured to annotate the target sample in the second image by a data labeling algorithm to obtain training data.
Optionally, the labeling module 504 comprises:
a first determination unit, configured to determine the attribute tags of the target sample, where the attribute tags include at least the sample name of the target sample;
a first generation unit, configured to determine the pixels corresponding to the target sample by performing image segmentation on the second image, and to generate a pixel annotation image corresponding to the target sample; and
a second determination unit, configured to determine the second image, the attribute tags, and the pixel annotation image as the training data.
Optionally, the apparatus provided by the embodiments of this specification further comprises:
a second sending module, configured to send a first regulating command to the image capture device, so that the image capture device rotates or translates according to the first regulating command;
or,
a third sending module, configured to send a second regulating command to a motion controller corresponding to a motion platform, so that the motion controller controls the motion platform to rotate or move according to the second regulating command, where the motion platform is used for placing the target sample.
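The interplay of the regulating commands and the image capture instruction — adjust the viewpoint, then capture — can be sketched as a simple acquisition loop. Here `send_adjust` and `send_capture` are hypothetical stand-ins for the links to the motion controller (or camera) and the image capture device; the specification does not define a command protocol.

```python
def acquire_multi_view(send_adjust, send_capture, angles):
    """For each desired shooting angle: issue a regulating command
    (rotating the motion platform or camera), then an image capture
    instruction, collecting one first image per viewpoint."""
    images = []
    for angle in angles:
        send_adjust(angle)               # first/second regulating command
        images.append(send_capture())    # image capture instruction
    return images
```

Capturing the same target sample from several viewpoints in this way multiplies the number of first images, and hence the amount of training data, obtained per sample.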
Optionally, the first image is an RGB image, and the replacement module 503 comprises:
a first extraction unit, configured to extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample; and
a first synthesis unit, configured to composite the foreground image with the setting background image to obtain the second image.
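For this RGB-only path, the extract-then-composite behavior of the replacement module amounts to a per-pixel select between the first image and the setting background image, given a foreground mask (1 = target-sample pixel). A minimal NumPy sketch, with names chosen for illustration:

```python
import numpy as np

def replace_background(first_image, foreground_mask, setting_background):
    """Keep the target-sample (foreground) pixels of the first image and
    take every other pixel from the setting background image, producing
    the second image."""
    keep = foreground_mask.astype(bool)[..., None]   # broadcast over RGB
    return np.where(keep, first_image, setting_background)
```

The same mask that drives the replacement can later serve as the pixel annotation image, which is one reason the segmentation and background-replacement steps pair naturally.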
Optionally, the first image includes an RGB image and a depth image, and the replacement module 503 comprises:
a second extraction unit, configured to extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
a second generation unit, configured to generate, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and
a second synthesis unit, configured to composite the virtual viewpoint image with the setting background image to obtain the second image.
Optionally, the apparatus provided by the embodiments of this specification further comprises:
an enhancement processing module, configured to perform data enhancement processing on the second image.
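The "data enhancement processing" on the second image is not detailed in this specification; a common reading is label-preserving augmentation. A minimal sketch with two illustrative transforms follows — the choice of transforms is an assumption, and if a geometric transform such as the flip is applied, the same transform must also be applied to the pixel annotation image so the labels stay aligned.

```python
import numpy as np

def enhance(second_image, rng):
    """Illustrative data enhancement: random horizontal flip plus a
    brightness jitter. The specific transforms are assumptions; the
    source only says 'data enhancement processing'."""
    img = second_image
    if rng.random() < 0.5:
        img = img[:, ::-1]                           # horizontal flip
    gain = rng.uniform(0.8, 1.2)                     # brightness jitter
    return np.clip(img.astype(float) * gain, 0, 255).astype(np.uint8)
```

Each call yields a new variant of the second image, so enhancement multiplies the amount of training data obtained from a single capture.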
The training data generating apparatus of the embodiments of this specification can also execute the methods performed by the training data generating device in Figs. 1 to 4 and realize the functions of the training data generating device in the embodiments shown in Figs. 1 to 4; details are not repeated here.
In the training data generating apparatus provided by the embodiments of this specification, when training data is generated, an image capture instruction is sent to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, the target sample in the second image is annotated by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. The embodiments of this specification thus realize automatic generation of training data, improving generation efficiency, reducing labor cost, and yielding more accurate training data.
Corresponding to the training data generation method provided by the embodiments of this specification, and based on the same idea, the embodiments of this specification further provide a training data generation system. Fig. 6 is a schematic structural diagram of the training data generation system provided by the embodiments of this specification. The system shown in Fig. 6 includes an image capture device 601 and an image processing device 602; the image processing device 602 includes the training data generating apparatus.
The image capture device 601 is configured to receive the image capture instruction sent by the training data generating device and to capture the first image of the target sample according to the image capture instruction.
The image processing device 602 is configured to send the image capture instruction to the image capture device; and is further configured to obtain the first image from the image capture device, replace the background image of the first image with a setting background image to obtain a second image of the target sample (where the background image is the region of the first image other than the target sample), and annotate the target sample in the second image by a data labeling algorithm to obtain training data.
Optionally, the image processing device 602 is specifically configured to:
determine the attribute tags of the target sample, where the attribute tags include at least the sample name of the target sample; determine, by performing image segmentation on the second image, the pixels corresponding to the target sample, and generate a pixel annotation image corresponding to the target sample; and determine the second image, the attribute tags, and the pixel annotation image as the training data.
Optionally, the image processing device 602 is further configured to:
send a first regulating command to the image capture device, so that the image capture device rotates or translates according to the first regulating command;
or,
send a second regulating command to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the second regulating command, where the motion platform is used for placing the target sample.
Optionally, the image processing device 602 is further configured to:
perform data enhancement processing on the second image.
Optionally, if the first image is an RGB image, the image processing device 602 is further configured to:
extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample; and composite the foreground image with the setting background image to obtain the second image.
Optionally, if the first image includes an RGB image and a depth image, the image processing device 602 is further configured to:
extract the foreground image of the first image, where the foreground image is the region corresponding to the target sample; generate, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and composite the virtual viewpoint image with the setting background image to obtain the second image.
In a specific embodiment, the training data generation system further includes a motion platform 603, as shown in Fig. 7. When training data is generated, the target sample is placed on the motion platform 603. The motion platform 603 is connected to a motion controller 604; specifically, the motion controller 604 may be integrated on the motion platform 603, or may be a device independent of the motion platform 603. The image processing device 602 controls the motion platform 603 to rotate or move by sending regulating commands to the motion controller 604. The image processing device 602 is also connected to the image capture device 601, controlling the image capture device 601 to capture images of the target sample and to rotate or translate.
That is, the image processing device 602 is connected to the motion controller 604, and the motion controller 604 is connected to the motion platform 603, so that the motion platform 603 is rotated or moved under the control of the image processing device 602.
Fig. 7 shows only one possible implementation of the training data generation system; the specific form of the system is not limited thereto, and the embodiments of this specification impose no limitation in this respect.
In the training data generation system provided by the embodiments of this specification, when training data is generated, the image processing device sends an image capture instruction to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, the target sample in the second image is annotated by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. The embodiments of this specification thus realize automatic generation of training data, improving generation efficiency, reducing labor cost, and yielding more accurate training data.
Further, based on the methods shown in Figs. 1 to 4 above, the embodiments of this specification also provide a training data generating device, as shown in Fig. 8.
The training data generating device may vary considerably with configuration or performance, and may include one or more processors 801 and a memory 802; the memory 802 may store one or more application programs or data, and may provide transient or persistent storage. An application program stored in the memory 802 may include one or more modules (not shown), each of which may include a series of computer-executable instruction information for the training data generating device. Further, the processor 801 may be configured to communicate with the memory 802 and to execute, on the training data generating device, the series of computer-executable instruction information stored in the memory 802. The training data generating device may also include one or more power supplies 803, one or more wired or wireless network interfaces 804, one or more input/output interfaces 805, one or more keyboards 806, and the like.
In a specific embodiment, the training data generating device includes a memory and one or more programs, where the one or more programs are stored in the memory, may include one or more modules, each of which may include a series of computer-executable instruction information for the training data generating device, and are configured to be executed by the one or more processors, the one or more programs including computer-executable instruction information for:
sending an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
obtaining the first image, and replacing the background image of the first image with a setting background image to obtain a second image of the target sample, where the background image is the region of the first image other than the target sample; and
annotating the target sample in the second image by a data labeling algorithm to obtain training data.
Optionally, when the computer-executable instruction information is executed, annotating the target sample in the second image by the data labeling algorithm to obtain the training data comprises:
determining the attribute tags of the target sample, where the attribute tags include at least the sample name of the target sample;
determining, by performing image segmentation on the second image, the pixels corresponding to the target sample, and generating a pixel annotation image corresponding to the target sample; and
determining the second image, the attribute tags, and the pixel annotation image as the training data.
Optionally, when the computer-executable instruction information is executed, before the image capture instruction is sent to the image capture device, the following steps may also be performed:
sending a first regulating command to the image capture device, so that the image capture device rotates or translates according to the first regulating command;
or,
sending a second regulating command to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the second regulating command, where the motion platform is used for placing the target sample.
Optionally, when the computer-executable instruction information is executed, before the target sample in the second image is annotated by the data labeling algorithm to obtain the training data, the following step may also be performed:
performing data enhancement processing on the second image.
Optionally, when the computer-executable instruction information is executed, the first image is an RGB image; correspondingly, replacing the background image of the first image with the setting background image to obtain the second image of the target sample comprises:
extracting the foreground image of the first image, where the foreground image is the region corresponding to the target sample; and
compositing the foreground image with the setting background image to obtain the second image.
Optionally, when the computer-executable instruction information is executed, the first image includes an RGB image and a depth image; correspondingly, replacing the background image of the first image with the setting background image to obtain the second image of the target sample comprises:
extracting the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
generating, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and
compositing the virtual viewpoint image with the setting background image to obtain the second image.
In the training data generating device provided by the embodiments of this specification, when training data is generated, an image capture instruction is sent to the image capture device, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, the target sample in the second image is annotated by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. The embodiments of this specification thus realize automatic generation of training data, improving generation efficiency, reducing labor cost, and yielding more accurate training data.
Further, based on the methods shown in Figs. 1 to 4 above, the embodiments of this specification also provide a storage medium for storing computer-executable instruction information. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instruction information stored on the storage medium, when executed by a processor, implements the following flow:
sending an image capture instruction to an image capture device, so that the image capture device captures a first image of a target sample according to the image capture instruction;
obtaining the first image, and replacing the background image of the first image with a setting background image to obtain a second image of the target sample, where the background image is the region of the first image other than the target sample; and
annotating the target sample in the second image by a data labeling algorithm to obtain training data.
Optionally, when the computer-executable instruction information stored on the storage medium is executed by a processor, annotating the target sample in the second image by the data labeling algorithm to obtain the training data comprises:
determining the attribute tags of the target sample, where the attribute tags include at least the sample name of the target sample;
determining, by performing image segmentation on the second image, the pixels corresponding to the target sample, and generating a pixel annotation image corresponding to the target sample; and
determining the second image, the attribute tags, and the pixel annotation image as the training data.
Optionally, when the computer-executable instruction information stored on the storage medium is executed by a processor, before the image capture instruction is sent to the image capture device, the following steps may also be performed:
sending a first regulating command to the image capture device, so that the image capture device rotates or translates according to the first regulating command;
or,
sending a second regulating command to the motion controller corresponding to the motion platform, so that the motion controller controls the motion platform to rotate or move according to the second regulating command, where the motion platform is used for placing the target sample.
Optionally, when the computer-executable instruction information stored on the storage medium is executed by a processor, before the target sample in the second image is annotated by the data labeling algorithm to obtain the training data, the following step may also be performed:
performing data enhancement processing on the second image.
Optionally, when the computer-executable instruction information stored on the storage medium is executed by a processor, the first image is an RGB image; correspondingly, replacing the background image of the first image with the setting background image to obtain the second image of the target sample comprises:
extracting the foreground image of the first image, where the foreground image is the region corresponding to the target sample; and
compositing the foreground image with the setting background image to obtain the second image.
Optionally, when the computer-executable instruction information stored on the storage medium is executed by a processor, the first image includes an RGB image and a depth image; correspondingly, replacing the background image of the first image with the setting background image to obtain the second image of the target sample comprises:
extracting the foreground image of the first image, where the foreground image is the region corresponding to the target sample;
generating, according to the foreground image and the depth image, a virtual viewpoint image corresponding to the foreground image; and
compositing the virtual viewpoint image with the setting background image to obtain the second image.
When the computer-executable instruction information stored on the storage medium provided by the embodiments of this specification is executed by a processor, an image capture instruction is sent to the image capture device during training data generation, so that the image capture device can be controlled to capture the first image of the target sample, realizing automatic collection of target sample images. In addition, the target sample in the second image is annotated by a data labeling algorithm, realizing automatic labeling of the target sample and improving labeling efficiency. The embodiments of this specification thus realize automatic generation of training data, improving generation efficiency, reducing labor cost, and yielding more accurate training data.
In the 1990s, an improvement of a technology could be clearly distinguished as an improvement in hardware (for example, an improvement of circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement of a method flow). With the development of technology, however, many of today's improvements of method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by his own programming, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, while the source code to be compiled is written in a specific programming language known as a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are most commonly used. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can be readily obtained merely by slightly programming the method flow in logic using the above hardware description languages and programming it into an integrated circuit.
A controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely by computer-readable program code, it is entirely possible to logically program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for realizing various functions may also be regarded as structures within the hardware component. Or even, the means for realizing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, apparatuses, modules, or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described by dividing its functions into various units. Of course, when this application is implemented, the functions of the units may be realized in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical memory, and the like) containing computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instruction information. This computer program instruction information may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instruction information executed by the processor of the computer or other programmable data processing device produces a means for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
This computer program instruction information may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instruction information stored in the computer-readable memory produces an article of manufacture including an instruction means that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
This computer program instruction information may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps is executed on the computer or other programmable device to produce computer-implemented processing, whereby the instruction information executed on the computer or other programmable device provides steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may realize information storage by any method or technology. The information may be computer-readable instruction information, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, commodity, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical memory, and the like) containing computer-usable program code.
This application may be described in the general context of computer-executable instruction information executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. This application may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on what differs from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, it is described relatively simply, and for relevant parts reference may be made to the description of the method embodiment.
The above is only an embodiment of this application and is not intended to limit this application. For those skilled in the art, various modifications and variations of this application are possible. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the scope of the claims of this application.