CN107909114A - Method and apparatus for training a supervised machine learning model - Google Patents

Method and apparatus for training a supervised machine learning model

Info

Publication number
CN107909114A
CN107909114A
Authority
CN
China
Prior art keywords
target object
model
artificial
data
annotation data
Prior art date
Legal status
Granted
Application number
CN201711236484.XA
Other languages
Chinese (zh)
Other versions
CN107909114B (en)
Inventor
颜沁睿
Current Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Original Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority to CN201711236484.XA
Publication of CN107909114A
Application granted
Publication of CN107909114B
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for training a supervised machine learning model are disclosed. The method includes: generating an artificial image containing a target object; recording annotation data related to the target object during generation of the artificial image; performing an operation in the model using the artificial image as input data of the model, to obtain derived data related to the target object; and comparing the derived data with the annotation data to determine whether to adjust parameters of the model. In this way, the large amount of manual annotation otherwise required in the training process of the model can be saved.

Description

Method and apparatus for training a supervised machine learning model
Technical field
The disclosure generally relates to the technical field of supervised machine learning models, and more particularly to a method and apparatus for training a supervised machine learning model.
Background
Supervised machine learning usually requires training a model with a large number of training samples and comparing the expected results with the derived results obtained by the model from those training samples, in order to determine whether the parameters of the model need to be adjusted and how to adjust them, so that the model becomes well suited to data outside the training samples (for example, data from actual applications). Models for supervised machine learning may include, for example, artificial neural networks (e.g., convolutional neural networks), decision trees, and the like.
Many different training sample sets or training sample databases have been provided. Before using such a sample set or sample database to train a supervised machine learning model, however, the designer of the model must manually annotate the target objects in a large (even massive) number of samples, in order to provide the annotation data related to the target objects (for example, the type, size, and position of each target object). The cost of such training is very high, while its accuracy and efficiency are very low.
Summary
In one aspect, a method for training a supervised machine learning model is provided. The method may include: generating an artificial image containing a target object; recording annotation data related to the target object during generation of the artificial image; performing an operation in the model using the artificial image as input data of the model, to obtain derived data related to the target object; and comparing the derived data with the annotation data to determine whether to adjust parameters of the model.
In another aspect, an apparatus for training a supervised machine learning model is provided. The apparatus may include: a rendering engine configured to generate an artificial image containing a target object and to record annotation data related to the target object during generation of the artificial image; an arithmetic unit configured to perform an operation in the model using the artificial image as input data of the model, to obtain derived data related to the target object; and an adjuster configured to compare the derived data with the annotation data to determine whether to adjust parameters of the model.
In another aspect, an apparatus for training a supervised machine learning model is provided. The apparatus may include a processor configured to perform the above method.
In another aspect, a non-transitory storage medium is provided, on which program instructions are stored, the program instructions performing the above method when executed by a computing device.
With the method and apparatus according to the embodiments of the present disclosure, the manual annotation required in the training process of supervised machine learning can be saved, thereby reducing cost, improving the accuracy of annotation, and improving the efficiency of training.
Brief description of the drawings
Fig. 1 shows a flowchart of an exemplary method for training a supervised machine learning model according to an embodiment of the present disclosure.
Fig. 2 shows an example of training a supervised machine learning model according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an exemplary apparatus for training a supervised machine learning model according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of another exemplary apparatus for training a supervised machine learning model according to an embodiment of the present disclosure.
Detailed Description
Fig. 1 shows a flowchart of an exemplary method for training a supervised machine learning model according to an embodiment of the present disclosure. As shown in Fig. 1, the exemplary method 100 according to an embodiment of the present disclosure may include: step S101, generating an artificial image containing a target object; step S105, recording annotation data related to the target object during generation of the artificial image; step S110, performing an operation in the model using the artificial image as input data of the model, to obtain derived data related to the target object; and step S115, comparing the derived data with the annotation data to determine whether to adjust parameters of the model.
The exemplary method 100 is described in detail below with reference to the example of Fig. 2.
The exemplary method 100 may begin at step S101, in which an artificial image containing a target object is generated.
In one embodiment, as shown in Fig. 2, a resource library 200 may be connected, and one or more elements may be obtained from the resource library 200.
The resource library 200 may include a variety of elements for generating artificial images. For example, the resource library 200 may include images, pictures, or animations in various forms of the various parts used to represent a "person", such as the head, arms, hands, fingers, torso, legs, feet, eyes, ears, nose, mouth, hair, beard, eyebrows, clothes, gloves, helmets, and caps; it may also include images, pictures, or animations in various forms of various implements such as swords, wrenches, and wooden sticks; and it may further include images, pictures, or animations in various forms representing various entities and their parts, such as animals, plants, vehicles, buildings, natural landscapes, and space objects. In addition, the images, graphics, or videos included in the resource library 200 may be one-dimensional, two-dimensional, three-dimensional, and/or of higher dimensions. The resource library 200 may also include other elements such as audio and text.
The method according to the embodiments of the present disclosure is not limited to the number, type, or organization (or storage) form of the elements included in the resource library 200, nor is it limited to the form, connection mode, or access mode of the resource library 200.
Then, in step S101, the one or more obtained elements may be combined together, and the combined aggregate of elements may be rendered (for example, 2D rendering or 3D rendering), so as to generate, for example, the artificial scene 205 shown in Fig. 2.
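Purely as an illustration of how such composition might be organized (the disclosure prescribes no particular data structures; every name, attribute key, and value below is hypothetical), step S101 could be sketched in Python as follows:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One asset drawn from the resource library 200 (names are illustrative)."""
    name: str                          # e.g. "human_head", "sword"
    attributes: dict                   # e.g. {"type": "person", "orientation": "due east"}
    position: tuple = (0.0, 0.0, 0.0)  # placement in the scene
    angle: float = 0.0                 # placement angle

@dataclass
class Scene:
    """The aggregate of combined elements that gets rendered (2D or 3D)."""
    elements: list = field(default_factory=list)

    def place(self, element, position, angle=0.0):
        element.position = tuple(position)
        element.angle = angle
        self.elements.append(element)

# Combining elements into an artificial scene such as scene 205:
scene = Scene()
scene.place(Element("human_head", {"type": "person", "orientation": "due east"}), (0, 0, 1.7))
scene.place(Element("sword", {"held_by": "person"}), (0.4, 0, 1.0), angle=30.0)
```

In an actual implementation, the aggregate held by such a scene object would then be handed to a 2D or 3D renderer.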
Furthermore, the person holding a sword in the artificial scene 205 may be taken as a target object 210. In other examples, any one or more entities in the scene may be taken as one or more target objects. For example, the sword in the hand of the sword-holding person may be taken as a target object; the cloud in the background, the wall, the sword-holding person, and the sword may each be taken as a separate target object; or the set of the cloud in the background, the wall, the sword-holding person, and the sword may be taken as a single target object.
In the example of Fig. 2, a fisheye lens projection may also be applied to the generated artificial scene 205 to generate the artificial image 220. In other examples, other types of projection may be applied to the artificial scene 205, such as wide-angle lens projection, standard lens projection, or telephoto lens projection, and multiple types of projection may be used. Alternatively, the artificial scene 205 may not be projected at all, and may be used directly as the artificial image 220.
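The disclosure does not give a projection formula; as a sketch only, an equidistant fisheye model (image radius r = f·θ, one common fisheye model assumed here for illustration) could map a 3D scene point to image coordinates as follows:

```python
import math

def fisheye_project(point, focal_length=1.0):
    """Equidistant fisheye projection of a 3D point (x, y, z) with z > 0.

    theta is the angle between the ray to the point and the optical axis;
    the image radius grows linearly with theta (r = f * theta), unlike the
    tangent law of a standard (pinhole) lens projection.
    """
    x, y, z = point
    theta = math.atan2(math.hypot(x, y), z)  # angle off the optical axis
    phi = math.atan2(y, x)                   # azimuth around the axis
    r = focal_length * theta
    return (r * math.cos(phi), r * math.sin(phi))
```

A standard lens projection would instead use r = f·tan(θ), so swapping the projection function is enough to generate differently projected artificial images from the same scene.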
While the artificial scene 205 or the artificial image 220 is being generated in step S101, step S105 is performed at the same time, so that the annotation data 215 related to the target object 210 are automatically recorded during generation of the artificial scene 205 or the artificial image 220. The annotation data define the various attribute values of each element, or each aggregate of elements, used to generate each entity in the artificial scene or the artificial image.
For example, in the example of Fig. 2, the elements selected from the resource library 200 in step S101 for generating the target object 210 may include element 201 (a human head facing due east), element 202 (a human's outstretched arms), element 203 (a human's legs and feet), and element 204 (a sword). In the process of combining these elements with the other selected elements and rendering them into the artificial scene 205 (step S101), it can be determined from the attributes of elements 201 to 204 that the type of the target object 210 related to these elements is "person", that its orientation is "due east", and that the article held in its hand is a "sword" (step S105). In addition, the coordinates of the target object 210 can be determined from the placement position of element 203 in the artificial scene 205 (steps S101 and S105), and the elevation angle of the target object 210 can be determined from the placement angle of element 201, 202, or 204 (steps S101 and S105), and so on. Furthermore, in the process of projecting the artificial scene 205 into the artificial image 220 (step S101), the change of each item of annotation data before and after the projection, such as the change of size, angle, and position, can be determined (step S105).
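A minimal sketch of this simultaneous recording, reusing the hypothetical Element structure from the earlier sketch (the element names and annotation keys are again illustrative, not taken from the disclosure):

```python
def compose_and_annotate(scene, target_elements):
    """Place the target's elements into the scene (step S101) while
    deriving the annotation record for the target object (step S105)."""
    annotations = {}
    for elem in target_elements:
        scene.elements.append(elem)           # step S101: build the scene
        annotations.update(elem.attributes)   # step S105: asset attributes
    # Attributes derived from placement rather than from the assets themselves:
    legs = next(e for e in target_elements if e.name == "human_legs")
    annotations["position"] = legs.position       # from placement position
    head = next(e for e in target_elements if e.name == "human_head")
    annotations["elevation_angle"] = head.angle   # from placement angle
    return annotations
```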
That is, the artificial scene 205 or the artificial image 220 (including the target object 210 therein) is generated by combining and rendering the selected elements according to their various annotation data. Meanwhile, the annotation data related to each element and/or each aggregate of elements used to generate the artificial scene 205 or the artificial image 220 can be recorded and/or determined during their generation.
In the example of Fig. 2, the annotation data 215 may include information such as the type of the target object 210 (human), its position coordinates, orientation, elevation angle, state, and hand-held article. In other examples, the annotation data 215 may also include information such as the shape, color (for example, the color of the clothes of the target object 210 in Fig. 2), and size (for example, the height of the target object 210 in Fig. 2) of the target object. In addition, in step S105, annotation data may also be generated for each element and/or each assembly of elements used to generate the artificial scene 205 or the artificial image 220.
Through steps S101 and S105, the artificial scene 205 or artificial image 220 containing the target object 210 and the annotation data 215 related to the target object 210 can be obtained at the same time, without any additional manual annotation of the target object 210 in the generated artificial scene 205 or artificial image 220.
Then, the exemplary method 100 proceeds to step S110. As shown in Fig. 2, the generated artificial image 220 may be used as the input of the supervised machine learning model 225 and supplied to the model 225. The model 225 performs an operation and outputs the derived data 230 related to the target object 210.
In one embodiment, the artificial image 220 may be supplied directly to the model 225, or a data set capable of representing the artificial image 220 may be supplied to the model 225 (for example, where the artificial image 220 is a 3D image, a set of 3D points may be supplied to the model 225). In a further embodiment, other information related to the artificial image 220 (for example, audio, position coordinates, etc.) may also be supplied to the model 225. The disclosure is not limited to the particular type, specific implementation, or specific task (for example, recognition, prediction, 3D reconstruction) of the model 225, nor to the specific format or particular form of the data received by the model.
Then, the exemplary method may continue to step S115, in which the annotation data 215 and the derived data 230 are compared. In one embodiment, the annotation data 215 and the derived data 230 may be compared to determine whether the two are identical. For example, the "type" in the annotation data 215 may be compared with the "type" in the derived data 230 to determine whether they are identical. In another embodiment, the annotation data 215 and the derived data 230 may be compared to determine whether the difference between the two exceeds a threshold. For example, it may be determined whether the difference between the "elevation angle" in the annotation data 215 and the "elevation angle" in the derived data 230 exceeds a threshold. The threshold may be specified by the designer of the supervised machine learning model 225 when designing the model 225.
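A sketch of such a comparison (the field names and threshold values are hypothetical; the disclosure only requires exact matching for some fields and thresholded matching for others):

```python
def needs_adjustment(annotation_data, derived_data, thresholds):
    """Step S115: decide whether the model parameters need adjusting.

    Categorical fields must match exactly; numeric fields may differ by
    at most a designer-specified threshold.
    """
    if derived_data.get("type") != annotation_data.get("type"):
        return True
    for field, limit in thresholds.items():  # e.g. {"elevation_angle": 2.0}
        if abs(derived_data[field] - annotation_data[field]) > limit:
            return True
    return False
```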
Where it is determined from the comparison result that the parameters of the model 225 need to be adjusted, the parameters of the model 225 may be adjusted and steps S110 and S115 repeated until the output of the model 225 meets the expected requirements. In one embodiment, different numbers of artificial images may be generated in step S101, and different comparison methods, parameter adjustment methods, and expected conditions may be used in steps S110 and S115, according to the type of the model and the expected goal of the training. For example, for a neural network, multiple (for example, a large number of) artificial images may be generated in step S101, and the parameters may be adjusted in steps S110 and S115 using, for example, a backpropagation algorithm, so that the error function descends along the gradient of its partial derivatives with respect to the parameters and eventually shrinks to an acceptable range.
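For a neural network, one way to realize this loop is sketched below; PyTorch, stochastic gradient descent, and a mean-squared-error loss are choices made for this illustration only and are not prescribed by the disclosure (generate_sample is a hypothetical stand-in for steps S101 and S105):

```python
import torch
import torch.nn as nn

def train(model: nn.Module, generate_sample, steps=1000, lr=1e-3, tol=1e-3):
    """Repeat steps S110 and S115, adjusting parameters by backpropagation
    until the error shrinks to an acceptable range."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        image, annotation = generate_sample()  # steps S101 + S105
        derived = model(image)                 # step S110: forward pass
        loss = loss_fn(derived, annotation)    # step S115: comparison
        if loss.item() < tol:                  # expected requirement met
            break
        optimizer.zero_grad()
        loss.backward()                        # gradients of the error function
        optimizer.step()                       # parameter adjustment
    return model
```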
In a training method according to the embodiments of the present disclosure (for example, the exemplary method 100), the annotation data are generated at the same time as the artificial scene or artificial image, so that no additional manual annotation is required, which advantageously reduces the cost of training and improves its efficiency.
In addition, the samples in common training sample sets or training sample databases are often the results of actual collection of typical data in typical applications, for example photos, sounds, or text collected with devices such as cameras or sound recorders for a specific crowd, a specific occasion, or a specific application. Using such samples may limit the training of the model, or the model itself, to that specific crowd, specific occasion, specific application, or the particular training sample set or training sample database used. Moreover, the accuracy and reliability of the training result will also depend on the annotation results for the samples in the training sample set or training sample database, or on the reference data provided by its supplier. For example, a trained model may perform well on the samples in the training sample set or training sample database used, but may show a larger error in situations beyond those samples.
In a training method according to the embodiments of the present disclosure, the generated artificial scene or artificial image is used for training, and the related annotation data are necessarily accurate and reliable (because the artificial scene or artificial image is generated based on these annotation data). Therefore, the training method according to the embodiments of the present disclosure can avoid the limitation imposed on the training result by the samples in a training sample set or training sample database, which is conducive to improving the accuracy and reliability of the training.
Fig. 3 and Fig. 4 show block diagrams of exemplary apparatuses for training a supervised machine learning model according to embodiments of the present disclosure.
As shown in Fig. 3, the exemplary apparatus 300 may include a rendering engine 301, an arithmetic unit 305, and an adjuster 310.
The rendering engine 301 may be configured to generate an artificial scene or artificial image containing a target object and to record the annotation data related to the target object during generation of the artificial scene or artificial image. In one embodiment, the rendering engine 301 may include one or more graphics processing units (GPUs).
The rendering engine 301 may be further configured to generate the artificial scene containing the target object by combining and rendering one or more elements from the resource library, and to generate the artificial image by applying one or more projections to the artificial scene. In one embodiment, the rendering engine 301 may include one or more cameras, so as to photograph the artificial scene in one or more projection modes such as wide-angle lens projection, standard lens projection, fisheye lens projection, and telephoto lens projection, thereby generating the artificial image. In a further embodiment, the rendering engine 301 may transform the artificial scene directly in hardware or software, transforming the artificial scene into an artificial image corresponding to the result of projection with one or more projection modes.
In addition, the rendering engine 301 may include an I/O interface (not shown) and a buffer memory, so as to receive from the resource library 200 the one or more elements used to generate the artificial scene, and to cache the received elements and/or the generated artificial image/artificial scene and/or intermediate results.
In one embodiment, the rendering engine 301 may be configured to perform, for example, steps S101 and S105 of the exemplary method 100 shown in Fig. 1.
The arithmetic unit 305 may be configured to perform an operation in the model using the artificial image as input data of the model, to obtain the derived data related to the target object. In one embodiment, the arithmetic unit 305 may include a general-purpose central processing unit (CPU) or a dedicated hardware accelerator for the model (for example, an array of parallel multipliers in the case of a convolutional neural network). In one embodiment, the arithmetic unit 305 may be configured to perform, for example, step S110 of the exemplary method 100 shown in Fig. 1.
The adjuster 310 may be configured to compare the derived data with the annotation data to determine whether to adjust the parameters of the model. In one embodiment, the adjuster 310 may include a general-purpose central processing unit (CPU) and/or a comparator (not shown). In addition, the adjuster 310 may also include an I/O interface (not shown) through which the adjusted model parameters are received. In one embodiment, the adjuster 310 may be configured to perform, for example, step S115 of the exemplary method 100 shown in Fig. 1.
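The three components mirror the steps of the method 100; a purely illustrative composition (class and method names are hypothetical, not taken from the disclosure) is:

```python
class TrainingApparatus:
    """Sketch of apparatus 300: a rendering engine, an arithmetic unit,
    and an adjuster cooperating as in the exemplary method 100."""

    def __init__(self, rendering_engine, arithmetic_unit, adjuster):
        self.rendering_engine = rendering_engine  # steps S101 + S105
        self.arithmetic_unit = arithmetic_unit    # step S110
        self.adjuster = adjuster                  # step S115

    def train_step(self, model):
        image, annotations = self.rendering_engine.generate()
        derived = self.arithmetic_unit.run(model, image)
        return self.adjuster.compare(derived, annotations)  # adjust needed?
```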
As shown in Fig. 4, the exemplary apparatus 400 may include one or more processors 401, a memory 405, and an I/O interface 410.
The processor 401 may be any form of processing unit having data processing capability and/or instruction execution capability, such as a general-purpose CPU, a GPU, or a dedicated accelerator. For example, the processor 401 may perform the method according to the embodiments of the present disclosure. In addition, the processor 401 may control other components in the apparatus 400 to perform desired functions. The processor 401 may be connected to the memory 405 and the I/O interface 410 through a bus system and/or a connection mechanism of other forms (not shown).
The memory 405 may include various forms of computer-readable and writable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. The readable and writable storage media may include, but are not limited to, electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. For example, when used together with a dedicated neural network processor, the memory 405 may also be RAM on the chip carrying the dedicated processor. The memory 405 may include program instructions for instructing the apparatus 400 to perform the method according to the embodiments of the present disclosure.
The I/O interface 410 may be used to provide parameters or data to the processor 401 and to output result data processed by the processor 401. In addition, the I/O interface 410 may also be connected to the resource library 200 to receive the one or more elements used to generate the artificial scene or artificial image.
It should be appreciated that the apparatuses 300 and 400 shown in Fig. 3 and Fig. 4 are only exemplary and not restrictive. An apparatus according to the embodiments of the present disclosure may have other components and/or structures.
Unless the context clearly requires otherwise, throughout the specification and the claims, words such as "comprise" and "include" should be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to". In addition, the words "herein", "above", "below", and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above description using the singular or plural number may also include the plural or singular number, respectively. The word "or", in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
While certain embodiments of the present disclosure have been described, these embodiments have been presented by way of example only and are not intended to limit the scope of the present disclosure. Indeed, the methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the scope of the present disclosure.

Claims (14)

1. A method for training a supervised machine learning model, comprising:
generating an artificial image containing a target object;
recording annotation data related to the target object during generation of the artificial image;
performing an operation in the model using the artificial image as input data of the model, to obtain derived data related to the target object; and
comparing the derived data with the annotation data to determine whether to adjust parameters of the model.
2. The method according to claim 1, wherein generating the artificial image comprises:
generating an artificial scene containing the target object by combining and rendering one or more elements from a resource library; and
generating the artificial image by applying one or more projections to the artificial scene.
3. The method according to claim 2, wherein the one or more projections comprise one or more of wide-angle lens projection, standard lens projection, fisheye lens projection, and telephoto lens projection.
4. The method according to claim 1, wherein the annotation data comprise one or more of a type of the target object, a shape of the target object, a color of the target object, a size of the target object, a position of the target object, and an orientation of the target object.
5. The method according to claim 1, wherein comparing the derived data with the annotation data comprises:
determining whether the derived data and the annotation data are identical.
6. The method according to claim 1, wherein comparing the derived data with the annotation data comprises:
determining whether a difference between the derived data and the annotation data exceeds a threshold.
7. An apparatus for training a supervised machine learning model, comprising:
a rendering engine configured to generate an artificial image containing a target object and to record annotation data related to the target object during generation of the artificial image;
an arithmetic unit configured to perform an operation in the model using the artificial image as input data of the model, to obtain derived data related to the target object; and
an adjuster configured to compare the derived data with the annotation data to determine whether to adjust parameters of the model.
8. The apparatus according to claim 7, wherein the rendering engine is further configured to generate an artificial scene containing the target object by combining and rendering one or more elements from a resource library, and to generate the artificial image by applying one or more projections to the artificial scene.
9. The apparatus according to claim 8, wherein the one or more projections comprise one or more of wide-angle lens projection, standard lens projection, fisheye lens projection, and telephoto lens projection.
10. The apparatus according to claim 7, wherein the annotation data comprise one or more of a type of the target object, a shape of the target object, a color of the target object, a size of the target object, a position of the target object, and an orientation of the target object.
11. The apparatus according to claim 7, wherein the adjuster is configured to determine whether the derived data and the annotation data are identical.
12. The apparatus according to claim 7, wherein the adjuster is configured to determine whether a difference between the derived data and the annotation data exceeds a prescribed threshold.
13. An apparatus for training a supervised machine learning model, comprising:
a processor configured to perform the method according to any one of claims 1 to 6.
14. A non-transitory storage medium on which program instructions are stored, the program instructions performing the method according to any one of claims 1 to 6 when executed by a computing device.
CN201711236484.XA 2017-11-30 2017-11-30 Method and apparatus for training supervised machine learning models Active CN107909114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711236484.XA CN107909114B (en) 2017-11-30 2017-11-30 Method and apparatus for training supervised machine learning models


Publications (2)

Publication Number Publication Date
CN107909114A (en) 2018-04-13
CN107909114B (en) 2020-07-17

Family

ID=61848114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711236484.XA Active CN107909114B (en) 2017-11-30 2017-11-30 Method and apparatus for training supervised machine learning models

Country Status (1)

Country Link
CN (1) CN107909114B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120290511A1 (en) * 2011-05-11 2012-11-15 Affectivon Ltd. Database of affective response and attention levels
CN103605667A (en) * 2013-10-28 2014-02-26 中国计量学院 Automatic image annotation algorithm
CN105631479A (en) * 2015-12-30 2016-06-01 中国科学院自动化研究所 Imbalance-learning-based depth convolution network image marking method and apparatus

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063155A (en) * 2018-08-10 2018-12-21 广州锋网信息科技有限公司 Language model parameter determination method, device and computer equipment
CN109063155B (en) * 2018-08-10 2020-08-04 广州锋网信息科技有限公司 Language model parameter determination method and device and computer equipment
CN110866138A (en) * 2018-08-17 2020-03-06 京东数字科技控股有限公司 Background generation method and system, computer system, and computer-readable storage medium
CN112640037A (en) * 2018-09-03 2021-04-09 首选网络株式会社 Learning device, inference device, learning model generation method, and inference method
CN109447240A (en) * 2018-09-28 2019-03-08 深兰科技(上海)有限公司 A kind of model training method, computer readable storage medium and calculate equipment
CN109447240B (en) * 2018-09-28 2021-07-02 深兰科技(上海)有限公司 Training method of graphic image replication model, storage medium and computing device
CN110750694A (en) * 2019-09-29 2020-02-04 支付宝(杭州)信息技术有限公司 Data annotation implementation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107909114B (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN107909114A (en) The method and apparatus of the model of training Supervised machine learning
Ge et al. 3d hand shape and pose estimation from a single rgb image
CN111354079B (en) Three-dimensional face reconstruction network training and virtual face image generation method and device
US10685454B2 (en) Apparatus and method for generating synthetic training data for motion recognition
US10380759B2 (en) Posture estimating apparatus, posture estimating method and storing medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
WO2017193906A1 (en) Image processing method and processing system
CN112884881B (en) Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
Stoll et al. Fast articulated motion tracking using a sums of gaussians body model
US9317970B2 (en) Coupled reconstruction of hair and skin
JP5227463B2 (en) Visual target tracking
CN108335353A (en) Three-dimensional rebuilding method, device and system, server, the medium of dynamic scene
WO2022095721A1 (en) Parameter estimation model training method and apparatus, and device and storage medium
CN110008806A (en) Storage medium, learning processing method, learning device and object identification device
JP6409433B2 (en) Image generation apparatus, image detection system, and image generation method
CN109872375B (en) Skeleton animation key frame compression method and device
CN116977522A (en) Rendering method and device of three-dimensional model, computer equipment and storage medium
CN107862387A (en) The method and apparatus for training the model of Supervised machine learning
CN110060348A (en) Facial image shaping methods and device
GB2606785A (en) Adaptive convolutions in neural networks
EP3756163A1 (en) Methods, devices, and computer program products for gradient based depth reconstructions with robust statistics
EP3408765A2 (en) Method and system for generating a synthetic database of postures and gestures
CN111862278A (en) Animation obtaining method and device, electronic equipment and storage medium
WO2020098566A1 (en) Three-dimensional modeling method and device, and computer readable storage medium
CN113763535A (en) Characteristic latent code extraction method, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant