CN109726760A - Method and device for training a picture synthesis model - Google Patents

Method and device for training a picture synthesis model

Info

Publication number
CN109726760A
Authority
CN
China
Prior art keywords
picture
group
synthetic model
module
synthesising
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811636004.3A
Other languages
Chinese (zh)
Other versions
CN109726760B (en)
Inventor
于海泳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uisee Technologies Beijing Co Ltd
Priority to CN201811636004.3A
Publication of CN109726760A
Application granted
Publication of CN109726760B
Legal status: Active
Anticipated expiration

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for training a picture synthesis model. In this scheme, the picture synthesis model is trained on simulated pictures and corresponding real pictures until it converges. Because the pictures produced by a converged picture synthesis model closely resemble real pictures, they can be used as the training samples required by deep learning methods, so no real samples need to be collected and training efficiency is improved. The model can also synthesize pictures for different scenes, which addresses the difficulty of manually collecting pictures for special scenes.

Description

Method and device for training a picture synthesis model
Technical field
The present invention relates to the field of deep learning technology, and in particular to a method and device for training a picture synthesis model.
Background art
With the progress of science, technology and society, artificial intelligence is used ever more widely; for example, autonomous driving, robotics and the security field all rely on it. Computer-vision cognition technologies such as object detection and recognition, object tracking and scene semantic analysis are key components of artificial intelligence, and their performance is currently improved mainly through deep learning methods.
Deep learning methods require a large number of training samples (hundreds of thousands or even millions of pictures) to train a network model. If all the training samples are collected manually, training takes a long time, and for some particular scenes collection is also difficult.
Therefore, how to generate realistic pictures is particularly important.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the prior art and proposes a method and device for training a picture synthesis model.
To this end, in a first aspect, an embodiment of the invention provides a method for training a picture synthesis model, comprising:
inputting a first group of pictures and a second group of pictures into the picture synthesis model, using the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features that express the same physical meaning, and processing the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture; the first group of pictures are simulated pictures, and the second group of pictures are real pictures corresponding to the first group of pictures;
if the synthesized picture belongs to a third group of pictures, obtained by processing the first group of pictures, comparing the synthesized picture with any picture in the second group of pictures; if the synthesized picture belongs to a fourth group of pictures, obtained by processing the second group of pictures, comparing the synthesized picture with any picture in the first group of pictures;
judging, according to the comparison result, whether the picture synthesis model has converged, and if so, stopping training the picture synthesis model.
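The sequence above can be pictured as a single training iteration. The sketch below is only an illustration: the tiny convolutional FeatureExtractor and PictureSynthesizer, the L1 comparison and the threshold value are all assumptions, not the architecture or loss the patent prescribes.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Stand-in for the part of the model that turns a picture group into abstract features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, pictures):
        return self.net(pictures)

class PictureSynthesizer(nn.Module):
    """Stand-in for the part of the model that turns abstract features back into pictures."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, features):
        return self.net(features)

def training_step(group1_sim, group2_real, extractor, synthesizer, threshold=0.05):
    """One iteration of the first-aspect method, with an assumed L1 comparison."""
    # Extract abstract features expressing the same physical meaning from both groups.
    feat_sim = extractor(group1_sim)
    feat_real = extractor(group2_real)

    # Third-group picture: synthesized from the first (simulated) group,
    # compared against pictures from the second (real) group.
    third_group = synthesizer(feat_sim)
    loss_third = torch.mean(torch.abs(third_group - group2_real))

    # Fourth-group picture: synthesized from the second (real) group,
    # compared against pictures from the first (simulated) group.
    fourth_group = synthesizer(feat_real)
    loss_fourth = torch.mean(torch.abs(fourth_group - group1_sim))

    converged = loss_third.item() < threshold and loss_fourth.item() < threshold
    return loss_third + loss_fourth, converged

# Usage with random tensors standing in for the two picture groups.
extractor, synthesizer = FeatureExtractor(), PictureSynthesizer()
sim_pictures = torch.rand(4, 3, 64, 64)
real_pictures = torch.rand(4, 3, 64, 64)
loss, converged = training_step(sim_pictures, real_pictures, extractor, synthesizer)
```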
Preferably, using the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, the abstract features expressing the same physical meaning comprises:
decoding the first group of pictures and the second group of pictures into a first group of data and a second group of data, respectively;
extracting the abstract features expressing the same physical meaning from the first group of data and the second group of data, respectively.
Preferably, processing the first group of pictures or the second group of pictures according to the extracted abstract features to obtain the synthesized picture comprises:
characterizing the separately extracted abstract features with the same physical parameters;
replacing the parameters that express those abstract features in the first group of data with the same physical parameters, and encoding the replaced first group of data to obtain the synthesized picture; or
replacing the parameters that express those abstract features in the second group of data with the same physical parameters, and encoding the replaced second group of data to obtain the synthesized picture.
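A compact sketch of this decode / replace / re-encode path. The "groups of data" are modeled as feature tensors and the "same physical parameters" as the output of a small shared head; these modeling choices, and all layer shapes, are assumptions rather than the patent's implementation.

```python
import torch
import torch.nn as nn

class DecodeReplaceEncode(nn.Module):
    """Decode a picture into 'data', swap the abstract-feature parameters for the shared
    ('same physical') parameters, then encode the replaced data back into a picture."""
    def __init__(self, channels=32):
        super().__init__()
        self.decode_net = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        # A 1x1 convolution stands in for mapping either group's abstract features
        # onto one shared parameterization.
        self.shared_head = nn.Conv2d(channels, channels, 1)
        self.encode_net = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, picture):
        data = self.decode_net(picture)      # first or second group of data
        replaced = self.shared_head(data)    # replace with the same physical parameters
        return self.encode_net(replaced)     # encode the replaced data -> synthesized picture

model = DecodeReplaceEncode()
synthesized = model(torch.rand(1, 3, 64, 64))
```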
Preferably, the first group of pictures is obtained as follows:
building a three-dimensional model with a physics simulation engine;
setting up a virtual camera in the physics simulation engine;
shooting the three-dimensional model with the virtual camera to obtain the first group of pictures.
Preferably, shooting the three-dimensional model with the virtual camera to obtain the first group of pictures comprises:
shooting the three-dimensional model with the virtual camera based on ray tracing to obtain the first group of pictures.
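As an illustration of a virtual camera shooting a three-dimensional model by ray tracing, the toy NumPy renderer below images a single sphere with one ray per pixel. A real physics simulation engine would supply its own camera and renderer; every value here (resolution, sphere, light) is invented for the sketch.

```python
import numpy as np

def render_sphere(width=160, height=120, sphere_center=(0.0, 0.0, 3.0), radius=1.0):
    """Toy ray-traced 'virtual camera': one ray per pixel, simple Lambert shading."""
    center = np.array(sphere_center)
    light_dir = np.array([1.0, 1.0, -1.0])
    light_dir /= np.linalg.norm(light_dir)
    aspect = width / height
    image = np.zeros((height, width))

    for y in range(height):
        for x in range(width):
            # Camera at the origin looking along +z; map the pixel to a ray direction.
            u = ((x + 0.5) / width * 2.0 - 1.0) * aspect
            v = 1.0 - (y + 0.5) / height * 2.0
            direction = np.array([u, v, 1.0])
            direction /= np.linalg.norm(direction)

            # Ray-sphere intersection: solve t^2 + b*t + c = 0 along the ray.
            oc = -center
            b = 2.0 * np.dot(oc, direction)
            c = np.dot(oc, oc) - radius * radius
            disc = b * b - 4.0 * c
            if disc < 0.0:
                continue
            t = (-b - np.sqrt(disc)) / 2.0
            if t <= 0.0:
                continue
            hit = t * direction
            normal = (hit - center) / radius
            image[y, x] = max(float(np.dot(normal, -light_dir)), 0.0)
    return image

# One simulated picture of the three-dimensional model (here just a sphere).
first_group_sample = render_sphere()
```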
Preferably, comparing the synthesized picture with any picture in the first group of pictures comprises:
calculating a first picture synthesis loss value of the synthesized picture with reference to any picture in the first group of pictures;
and comparing the synthesized picture with any picture in the second group of pictures comprises:
calculating a second picture synthesis loss value of the synthesized picture with reference to any picture in the second group of pictures.
Judging according to the comparison result whether the picture synthesis model has converged comprises:
judging whether the calculated first picture synthesis loss value and/or second picture synthesis loss value reach a convergence threshold;
and, if the calculated first picture synthesis loss value and/or second picture synthesis loss value reach the convergence threshold, determining that the picture synthesis model has converged.
Further, the method also comprises:
if the picture synthesis model has not converged, adjusting the picture synthesis model and inputting pictures into the adjusted picture synthesis model to continue training it, until the picture synthesis model converges.
Further, before judging according to the comparison result whether the picture synthesis model has converged, the method also comprises:
inputting the third group of pictures into the picture synthesis model, using the picture synthesis model to extract, from the first group of pictures and the third group of pictures respectively, the abstract features expressing the same physical meaning, and processing the third group of pictures according to the extracted abstract features to obtain a synthesized picture, the synthesized picture obtained from the third group of pictures being a fifth group of pictures;
comparing the fifth group of pictures with any picture in the first group of pictures;
and judging according to the comparison result whether the picture synthesis model has converged comprises:
judging whether the picture synthesis model has converged according to the comparison result for at least one of the third group of pictures, the fourth group of pictures and the fifth group of pictures.
In a second aspect, an embodiment of the invention provides a device for training a picture synthesis model, comprising:
a feature extraction module, configured to input a first group of pictures and a second group of pictures into the picture synthesis model and use the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features expressing the same physical meaning, where the first group of pictures are simulated pictures and the second group of pictures are real pictures corresponding to the first group of pictures;
a synthesis module, configured to process the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture;
a comparison module, configured to compare the synthesized picture with any picture in the second group of pictures if the synthesized picture belongs to a third group of pictures obtained by processing the first group of pictures, and to compare the synthesized picture with any picture in the first group of pictures if the synthesized picture belongs to a fourth group of pictures obtained by processing the second group of pictures; and
a judgment module, configured to judge according to the comparison result whether the picture synthesis model has converged, and to stop training the picture synthesis model if it has converged.
Preferably, the feature extraction module comprises a decoding module and a sub-feature extraction module, wherein:
the decoding module is configured to decode the first group of pictures and the second group of pictures into a first group of data and a second group of data, respectively;
the sub-feature extraction module is configured to extract the abstract features expressing the same physical meaning from the first group of data and the second group of data, respectively.
Preferably, the synthesis module comprises a unification module, a processing module and an encoding module, wherein:
the unification module is configured to characterize the separately extracted abstract features with the same physical parameters;
the processing module is configured to replace the parameters expressing those abstract features in the first group of data with the same physical parameters, or to replace the parameters expressing those abstract features in the second group of data with the same physical parameters;
the encoding module is configured to encode the replaced first group of data, or the replaced second group of data, to obtain the synthesized picture.
Further, the device also comprises a model building module and an acquisition module, wherein:
the model building module is configured to build a three-dimensional model with a physics simulation engine;
the acquisition module is configured to set up a virtual camera in the physics simulation engine and shoot the three-dimensional model with the virtual camera to obtain the first group of pictures.
Preferably, the acquisition module is specifically configured to:
shoot the three-dimensional model with the virtual camera based on ray tracing to obtain the first group of pictures.
Preferably, the comparison module is specifically configured to:
calculate a first picture synthesis loss value of the synthesized picture with reference to any picture in the first group of pictures; or calculate a second picture synthesis loss value of the synthesized picture with reference to any picture in the second group of pictures.
The judgment module comprises a convergence judgment module and a determination module, wherein:
the convergence judgment module is configured to judge whether the calculated first picture synthesis loss value and/or second picture synthesis loss value reach a convergence threshold;
the determination module is configured to determine that the picture synthesis model has converged if the convergence judgment module judges that the calculated first picture synthesis loss value and/or second picture synthesis loss value reach the convergence threshold.
Further, the device also comprises:
an adjustment module, configured to adjust the picture synthesis model if it has not converged, and to input pictures into the adjusted picture synthesis model to continue training it until the picture synthesis model converges.
Further, the feature extraction module is also configured to input the third group of pictures into the picture synthesis model and use the picture synthesis model to extract, from the first group of pictures and the third group of pictures respectively, abstract features expressing the same physical meaning;
the synthesis module is also configured to process the third group of pictures according to the extracted abstract features to obtain a synthesized picture, the synthesized picture obtained from the third group of pictures being a fifth group of pictures;
the comparison module is also configured to compare the fifth group of pictures with any picture in the first group of pictures;
the judgment module is specifically configured to judge whether the picture synthesis model has converged according to the comparison result for at least one of the third group of pictures, the fourth group of pictures and the fifth group of pictures.
In a third aspect, an embodiment of the invention provides an electronic device, comprising:
one or more processors; and
a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect or any preferred embodiment thereof.
In a fourth aspect, an embodiment of the invention provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method of the first aspect or any preferred embodiment thereof.
An embodiment of the invention thus proposes a method for training a picture synthesis model, comprising: inputting a first group of pictures and a second group of pictures into the picture synthesis model, using the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features expressing the same physical meaning, and processing the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture, where the first group of pictures are simulated pictures and the second group of pictures are real pictures corresponding to the first group of pictures; if the synthesized picture belongs to a third group of pictures obtained by processing the first group of pictures, comparing it with any picture in the second group of pictures; if the synthesized picture belongs to a fourth group of pictures obtained by processing the second group of pictures, comparing it with any picture in the first group of pictures; and judging according to the comparison result whether the picture synthesis model has converged, and stopping training if it has. In this scheme, the picture synthesis model is trained on simulated pictures and corresponding real pictures until it converges. Because the pictures synthesized by a converged model closely resemble real pictures, they can be used as the training samples required by deep learning methods; the large number of samples that such methods need therefore no longer has to be collected in the real world, which improves training efficiency. Pictures for different scenes can also be synthesized, which addresses the difficulty of manually collecting pictures for special scenes.
In addition, during training of the picture synthesis model the synthesized picture is compared with any picture in the first group or the second group to judge whether the picture synthesis model has converged; the reference pictures therefore do not need to be labeled in advance, i.e. the training is unsupervised, which further improves the efficiency of training the picture synthesis model.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the embodiments of the invention and form part of the specification; together with the embodiments they serve to explain the invention and do not limit it. The above and other features and advantages will become apparent to those skilled in the art from the detailed description of example embodiments with reference to the drawings, in which:
Figure 1A is a schematic diagram of a method for training a picture synthesis model according to an embodiment of the invention;
Figure 1B is another schematic diagram of the method for training a picture synthesis model according to an embodiment of the invention;
Figure 2A is a schematic diagram of a device for training a picture synthesis model according to an embodiment of the invention;
Figure 2B is another schematic diagram of the device for training a picture synthesis model according to an embodiment of the invention;
Figure 2C is another schematic diagram of the device for training a picture synthesis model according to an embodiment of the invention;
Figure 2D is another schematic diagram of the device for training a picture synthesis model according to an embodiment of the invention;
Figure 2E is another schematic diagram of the device for training a picture synthesis model according to an embodiment of the invention;
Figure 2F is another schematic diagram of the device for training a picture synthesis model according to an embodiment of the invention.
Detailed description of the embodiments
To help those skilled in the art better understand the technical solution of the invention, the method for training a picture synthesis model provided by the invention is described in detail below with reference to the accompanying drawings.
Example embodiments are described more fully below with reference to the drawings, but they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those skilled in the art.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for describing particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the terms "comprises" and/or "made of", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
Embodiments described herein may be described with reference to idealized schematic plan and/or sectional views of the disclosure. Accordingly, the example illustrations may be modified in accordance with manufacturing techniques and/or tolerances, and the embodiments are not limited to those shown in the drawings but include modifications of configuration formed on the basis of manufacturing processes. The regions illustrated in the drawings are therefore schematic in nature, and their shapes illustrate the specific shapes of regions of elements without being limiting.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the relevant art and this disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Embodiment one
Referring to Figure 1A, an embodiment of the invention proposes a method 10 for training a picture synthesis model, comprising:
Step 100: inputting a first group of pictures and a second group of pictures into the picture synthesis model, and using the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features expressing the same physical meaning.
Step 110: processing the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture; the first group of pictures are simulated pictures, and the second group of pictures are real pictures corresponding to the first group of pictures.
Step 120: if the synthesized picture belongs to a third group of pictures, obtained by processing the first group of pictures, comparing the synthesized picture with any picture in the second group of pictures; if the synthesized picture belongs to a fourth group of pictures, obtained by processing the second group of pictures, comparing the synthesized picture with any picture in the first group of pictures.
Step 130: judging according to the comparison result whether the picture synthesis model has converged; if it has converged, executing step 140.
Step 140: stopping training the picture synthesis model.
Steps 100 to 140 describe the case in which the picture synthesis model is determined, according to the comparison result, to have converged, so that training is stopped. In practice, the comparison result may instead show that the picture synthesis model has not converged; the picture synthesis model then needs to be adjusted and training continued, with the steps of obtaining a synthesized picture, comparing and judging executed in a loop until the picture synthesis model converges. Specifically:
Further, the method 10 also comprises the following step:
Step 150: if the picture synthesis model has not converged, adjusting the picture synthesis model and inputting pictures into the adjusted picture synthesis model to continue training it, until the picture synthesis model converges, as shown in Figure 1B.
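A sketch of this outer loop, assuming that "adjusting the picture synthesis model" means a gradient-based update of its parameters (the patent leaves the adjustment mechanism open) and reusing the hypothetical training_step, extractor and synthesizer from the sketch in the summary.

```python
import torch

def train_until_converged(extractor, synthesizer, sim_pictures, real_pictures,
                          max_rounds=10000, lr=1e-4):
    """Step 150 as a loop: adjust the model and feed pictures in again until it converges."""
    params = list(extractor.parameters()) + list(synthesizer.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(max_rounds):
        loss, converged = training_step(sim_pictures, real_pictures, extractor, synthesizer)
        if converged:
            break                # step 140: stop training the picture synthesis model
        optimizer.zero_grad()
        loss.backward()          # the assumed form of "adjusting the picture synthesis model"
        optimizer.step()
    return extractor, synthesizer
```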
It should be noted that the picture synthesis model into which pictures are input in step 100 may be the initial model, or a model that has already been trained from the initial model but has not yet converged.
For example, suppose a first picture synthesis model is the initial model. When training starts, it is first judged whether the first picture synthesis model has converged. If it has, training stops; if not, the first picture synthesis model is adjusted to obtain a second picture synthesis model, and it is judged whether the second picture synthesis model has converged. If it has, training stops; if not, the second picture synthesis model is adjusted to obtain a third picture synthesis model, and so on, until the trained picture synthesis model converges.
In the embodiment of the invention, when adjusting the picture synthesis model and inputting pictures into the adjusted picture synthesis model to train it, the following is preferably used:
inputting the first group of pictures and the second group of pictures into the adjusted picture synthesis model to train the picture synthesis model.
Of course, to further improve the accuracy of judging whether the picture synthesis model has converged, other samples may also be used for training; for example, an M-th group of pictures and an (M+1)-th group of pictures, with M not equal to 1, may be input into the adjusted picture synthesis model to train it, where the M-th group of pictures are simulated pictures and the (M+1)-th group of pictures are real pictures corresponding to the M-th group of pictures.
That is, during training of the picture synthesis model, each training round may use the same training samples, e.g. always the first group of pictures and the second group of pictures; or new training samples may be used each time, e.g. always an M-th group of pictures and an (M+1)-th group of pictures; or the training samples may be refreshed periodically, e.g. new training samples every 5 training rounds. Of course, other schemes may be used as well; no specific limitation is made here, as long as the purpose of training is achieved.
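The three sampling schemes just described (a fixed pair of groups, a fresh pair every round, or a refresh every few rounds) can be expressed as a small selector; the five-round interval mirrors the example in the text, while the function and its names are purely illustrative.

```python
def pick_training_groups(round_index, group_pairs, schedule="refresh_every_5"):
    """Return the (simulated, real) pair of picture groups to use for this training round.

    group_pairs[m] holds the pair (M-th group, (M+1)-th group) of corresponding
    simulated and real pictures.
    """
    if schedule == "fixed":              # always the first and second groups
        return group_pairs[0]
    if schedule == "always_new":         # a new pair every round
        return group_pairs[round_index % len(group_pairs)]
    if schedule == "refresh_every_5":    # change the pair every 5 training rounds
        return group_pairs[(round_index // 5) % len(group_pairs)]
    raise ValueError(f"unknown schedule: {schedule}")
```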
In the scheme described by method 10, a synthesized picture is obtained in one of two ways: one is to process the first group of pictures according to the extracted abstract features, giving what will be called a class-A synthesized picture; the other is to process the second group of pictures according to the extracted abstract features, giving what will be called a class-B synthesized picture. The reference pictures used in the comparison differ with the type of synthesized picture: a class-A synthesized picture is compared with any picture in the second group of pictures, while a class-B synthesized picture is compared with any picture in the first group of pictures. Because different reference pictures are used in the comparison step for different types of synthesized pictures, the accuracy of judging the convergence of the picture synthesis model can be improved.
It should be noted that although step 110 can obtain synthesized pictures in two ways, it does not have to produce both a class-A and a class-B synthesized picture each time it is executed. In practice, only a class-A synthesized picture may be produced, in which case step 120 compares the synthesized picture with any picture in the second group of pictures and convergence is judged from that comparison result alone. Alternatively, only a class-B synthesized picture may be produced, in which case step 120 compares the synthesized picture with any picture in the first group of pictures and convergence is judged from that comparison result alone. Or, to improve the accuracy of the convergence judgment, both a class-A and a class-B synthesized picture may be produced; step 120 then compares the class-B synthesized picture with any picture in the first group of pictures and the class-A synthesized picture with any picture in the second group of pictures, convergence is judged from both comparison results, and the picture synthesis model is determined to have converged only if both comparison results indicate convergence.
Preferably, when using the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, the abstract features expressing the same physical meaning, the following may be used:
decoding the first group of pictures and the second group of pictures into a first group of data and a second group of data, respectively;
extracting the abstract features expressing the same physical meaning from the first group of data and the second group of data, respectively.
Preferably, when processing the first group of pictures or the second group of pictures according to the extracted abstract features to obtain the synthesized picture, the following may be used:
characterizing the separately extracted abstract features with the same physical parameters;
replacing the parameters expressing those abstract features in the first group of data with the same physical parameters, and encoding the replaced first group of data to obtain the synthesized picture; or
replacing the parameters expressing those abstract features in the second group of data with the same physical parameters, and encoding the replaced second group of data to obtain the synthesized picture.
In the embodiment of the invention, the extracted abstract features may be the highest network layer in each group of data; the highest layers extracted from the two groups are then characterized with the same physical parameters.
For example, suppose the first group of pictures are simulated pictures of building A and decoding them yields a first group of data comprising 10 network layers, while the second group of pictures are real pictures of building A and decoding them yields a second group of data comprising 8 network layers; then the 10th layer of the first group of data and the 8th layer of the second group of data are characterized with the same physical parameters.
The example above assumes that the data decoded from the first group of pictures and the data decoded from the second group of pictures have different numbers of network layers, but this is not a limitation; the two may also have the same number of layers, e.g. the network corresponding to the first group of data has 10 layers and the network corresponding to the second group of data also has 10 layers.
Of course, no specific limitation is placed on which network layer is extracted, as long as the abstract features expressing the same physical meaning are extracted from the two groups of data.
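The building-A example amounts to pairing the highest layer of each group's decoded data, even when the two decodings have different depths. The sketch below shows only that pairing, with plain lists standing in for network layers.

```python
def pair_highest_layers(sim_layers, real_layers):
    """Take the top layer of each group's decoded data as its abstract feature.

    sim_layers / real_layers are lists of per-layer feature maps, e.g. 10 entries for
    the simulated picture of building A and 8 entries for the real one, as above.
    """
    return sim_layers[-1], real_layers[-1]   # 10th layer of group 1, 8th layer of group 2

# Toy 'feature pyramids' of different depth; the contents are meaningless placeholders.
sim_pyramid = [f"sim_layer_{i}" for i in range(1, 11)]
real_pyramid = [f"real_layer_{i}" for i in range(1, 9)]
print(pair_highest_layers(sim_pyramid, real_pyramid))   # ('sim_layer_10', 'real_layer_8')
```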
Preferably, when encoding the processed first group of data or the processed second group of data, predictive coding or transform-domain coding may be used; no specific limitation is made here.
Preferably, decoding and encoding may each be carried out by dedicated modules, e.g. a decoding network module decodes the first group of pictures and the second group of pictures, and a coding network module encodes the processed first group of data and second group of data. Further, the decoding network module may comprise a first decoding network module, which decodes the first group of pictures, and a second decoding network module, which decodes the second group of pictures; and the coding network module may comprise a first coding network module, which encodes the replaced first group of data, and a second coding network module, which encodes the replaced second group of data. Of course, performing the decoding and encoding operations with a decoding network module and a coding network module is only an example; any arrangement that realizes the decoding and encoding functions may be used, and it is not required that the decoding network module contain two decoding network modules or that the coding network module contain two coding network modules.
Similarly, the step of judging according to the comparison result whether the picture synthesis model has converged may be performed by a judgment module, which may in turn be divided into two judgment modules: a first judgment module that judges from the result of comparing the synthesized picture with any picture in the first group of pictures, and a second judgment module that judges from the result of comparing the synthesized picture with any picture in the second group of pictures. This too is only an example; the judgment need not be made by a judgment module and may be performed by other modules, as long as the judging function is realized, and it is likewise not required that two judgment modules judge separately.
Because a physics simulation engine can synthesize a large number of pictures with randomness and diversity, in the embodiment of the invention the first group of pictures is preferably obtained as follows:
building a three-dimensional model with the physics simulation engine;
setting up a virtual camera in the physics simulation engine;
shooting the three-dimensional model with the virtual camera to obtain the first group of pictures.
In this way, the first group of pictures obtained in the embodiment of the invention is diverse.
Preferably, when shooting the three-dimensional model with the virtual camera to obtain the first group of pictures, the following may be used:
shooting the three-dimensional model with the virtual camera based on ray tracing to obtain the first group of pictures.
Preferably, when comparing the synthesized picture with any picture in the first group of pictures, the following may be used:
calculating a first picture synthesis loss value of the synthesized picture with reference to any picture in the first group of pictures.
When comparing the synthesized picture with any picture in the second group of pictures, the following may be used:
calculating a second picture synthesis loss value of the synthesized picture with reference to any picture in the second group of pictures.
When the comparison is made by calculating picture synthesis loss values, judging according to the comparison result whether the picture synthesis model has converged is preferably done as follows:
judging whether the calculated first picture synthesis loss value and/or second picture synthesis loss value reach a convergence threshold;
and, if the calculated first picture synthesis loss value and/or second picture synthesis loss value reach the convergence threshold, determining that the picture synthesis model has converged.
Further, if the calculated first picture synthesis loss value and/or second picture synthesis loss value do not reach the convergence threshold, it is determined that the picture synthesis model has not converged and training of the picture synthesis model continues.
The above is an example of judging whether the picture synthesis model has converged by means of picture synthesis loss values; convergence may of course be determined in other ways, or other parameters may be used to judge whether the picture synthesis model has converged, and the judgment is not limited to this example.
It should be noted that the convergence threshold in the embodiment of the invention may vary with the application scenario. For example, in a scenario that demands high accuracy of the pictures synthesized by the picture synthesis model, the convergence threshold is smaller, e.g. 8.0; in a scenario where the accuracy demanded of the synthesized pictures is not as high, the convergence threshold may be slightly increased, e.g. to 8.2. Of course, 8.0 and 8.2 are only specific examples and no limitation is intended.
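One possible way to record the scene-dependent threshold, kept deliberately simple; the 8.0 and 8.2 figures are the text's own examples, while the scene names and the helper are invented for illustration.

```python
# Scene-dependent convergence thresholds: stricter (higher-accuracy) scenes use smaller values.
CONVERGENCE_THRESHOLDS = {
    "high_accuracy_scene": 8.0,   # example figure from the text
    "relaxed_scene": 8.2,         # slightly increased when the accuracy demand is lower
}

def threshold_for(scene: str) -> float:
    return CONVERGENCE_THRESHOLDS.get(scene, 8.0)
```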
The above discloses judging, within one cycle, whether the picture synthesis model has converged using the comparison result for the third group of pictures and/or the fourth group of pictures. To further improve the accuracy of the judgment, before judging according to the comparison result whether the picture synthesis model has converged, the method also comprises:
inputting the third group of pictures into the picture synthesis model, using the picture synthesis model to extract, from the first group of pictures and the third group of pictures respectively, the abstract features expressing the same physical meaning, and processing the third group of pictures according to the extracted abstract features to obtain a synthesized picture, the synthesized picture obtained from the third group of pictures being a fifth group of pictures;
comparing the fifth group of pictures with any picture in the first group of pictures.
Judging according to the comparison result whether the picture synthesis model has converged then comprises:
judging whether the picture synthesis model has converged according to the comparison result for at least one of the third group of pictures, the fourth group of pictures and the fifth group of pictures.
Using the picture synthesis model to extract, from the first group of pictures and the third group of pictures respectively, the abstract features expressing the same physical meaning comprises:
decoding the first group of pictures and the third group of pictures into a first group of data and a third group of data, respectively;
extracting the abstract features expressing the same physical meaning from the first group of data and the third group of data, respectively.
Processing the third group of pictures according to the extracted abstract features to obtain the synthesized picture comprises:
characterizing the separately extracted abstract features with the same physical parameters;
replacing the parameters expressing those abstract features in the third group of data with the same physical parameters, and encoding the replaced third group of data to obtain the fifth group of pictures.
Similarly, the synthesized fourth group of pictures may also be input into the picture synthesis model to obtain a further synthesized picture, e.g. a sixth group of pictures. In that case, when judging according to the comparison result whether the picture synthesis model has converged, the following may be used:
judging whether the picture synthesis model has converged according to the comparison result for at least one of the third group of pictures, the fourth group of pictures, the fifth group of pictures and the sixth group of pictures.
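These additional passes resemble a cycle-style check: the synthesized third group is pushed back through the model to give a fifth group that should match the first group, and the fourth group gives a sixth group. The sketch reuses the hypothetical extractor/synthesizer modules from the earlier sketches, assumes an L1 comparison, and assumes the sixth group is referenced against the second group (the text leaves that reference implicit).

```python
import torch

def cycle_comparison_losses(group1_sim, group2_real, extractor, synthesizer):
    """Losses for the third/fourth/fifth/sixth groups against their reference groups."""
    third_group = synthesizer(extractor(group1_sim))     # from the first (simulated) group
    fourth_group = synthesizer(extractor(group2_real))   # from the second (real) group

    fifth_group = synthesizer(extractor(third_group))    # third group fed back into the model
    sixth_group = synthesizer(extractor(fourth_group))   # fourth group fed back into the model

    return {
        "third_vs_second": torch.mean(torch.abs(third_group - group2_real)),
        "fourth_vs_first": torch.mean(torch.abs(fourth_group - group1_sim)),
        "fifth_vs_first": torch.mean(torch.abs(fifth_group - group1_sim)),
        # The reference for the sixth group is left implicit in the text; the second
        # (real) group is assumed here by symmetry.
        "sixth_vs_second": torch.mean(torch.abs(sixth_group - group2_real)),
    }
```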
In this scheme, the picture synthesis model is trained on simulated pictures and corresponding real pictures until it converges. Because the pictures synthesized by a converged picture synthesis model closely resemble real pictures, they can be used as the training samples required by deep learning methods; the large number of samples that such methods need therefore no longer has to be collected in the real world, which improves training efficiency. Pictures for different scenes can also be synthesized, which addresses the difficulty of manually collecting pictures for special scenes.
In addition, during training of the picture synthesis model the synthesized picture is compared with any picture in the first group or the second group to judge whether the picture synthesis model has converged; the reference pictures therefore do not need to be labeled in advance, i.e. the training is unsupervised, which, from this point of view, also improves the efficiency of training the picture synthesis model.
Embodiment two
Referring to Figure 2A, an embodiment of the invention proposes a device 20 for training a picture synthesis model, comprising:
a feature extraction module 200, configured to input a first group of pictures and a second group of pictures into the picture synthesis model and use the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features expressing the same physical meaning, where the first group of pictures are simulated pictures and the second group of pictures are real pictures corresponding to the first group of pictures;
a synthesis module 210, configured to process the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture;
a comparison module 220, configured to compare the synthesized picture with any picture in the second group of pictures if the synthesized picture belongs to a third group of pictures obtained by processing the first group of pictures, and to compare the synthesized picture with any picture in the first group of pictures if the synthesized picture belongs to a fourth group of pictures obtained by processing the second group of pictures; and
a judgment module 230, configured to judge according to the comparison result whether the picture synthesis model has converged, and to stop training the picture synthesis model if it has converged.
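A structural sketch of device 20 as plain Python classes, showing only how the four modules hand data to one another for one judgment round; the method bodies are placeholders and every signature is an assumption rather than the patent's interface.

```python
class FeatureExtractingModule:
    def extract(self, group1_sim, group2_real):
        """Extract abstract features expressing the same physical meaning from both groups."""
        raise NotImplementedError

class SynthesisModule:
    def synthesize(self, features):
        """Process a picture group according to the extracted features into a synthesized picture."""
        raise NotImplementedError

class ComparisonModule:
    def compare(self, synthesized, reference_group):
        """Return a comparison result, e.g. a picture synthesis loss value."""
        raise NotImplementedError

class JudgmentModule:
    def is_converged(self, comparison_result):
        """Decide from the comparison result whether the picture synthesis model has converged."""
        raise NotImplementedError

class PictureSynthesisTrainingDevice:
    """Device 20: wires modules 200-230 together for one comparison-and-judgment round."""
    def __init__(self, extractor, synthesizer, comparator, judge):
        self.extractor = extractor      # module 200
        self.synthesizer = synthesizer  # module 210
        self.comparator = comparator    # module 220
        self.judge = judge              # module 230

    def run_round(self, group1_sim, group2_real):
        feat_sim, feat_real = self.extractor.extract(group1_sim, group2_real)
        third_group = self.synthesizer.synthesize(feat_sim)       # from the simulated group
        result = self.comparator.compare(third_group, group2_real)
        return self.judge.is_converged(result)
```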
The scheme described above covers the case in which the picture synthesis model is determined, according to the comparison result, to have converged, so that training is stopped. In practice, the comparison result may instead show that the picture synthesis model has not converged; the picture synthesis model then needs to be adjusted and training continued, with the steps of obtaining a synthesized picture, comparing and judging executed in a loop until the picture synthesis model converges. The device 20 therefore also comprises:
an adjustment module 240, configured to adjust the picture synthesis model if it has not converged and to input pictures into the adjusted picture synthesis model to continue training it until the picture synthesis model converges, as shown in Figure 2B.
It should be noted that the picture synthesis model into which the feature extraction module 200 inputs pictures may be the initial model, or a model that has already been trained from the initial model but has not yet converged.
For example, suppose a first picture synthesis model is the initial model. When training starts, it is first judged whether the first picture synthesis model has converged. If it has, training stops; if not, the first picture synthesis model is adjusted to obtain a second picture synthesis model, and it is judged whether the second picture synthesis model has converged. If it has, training stops; if not, the second picture synthesis model is adjusted to obtain a third picture synthesis model, and so on, until the trained picture synthesis model converges.
In the embodiment of the invention, when the adjustment module 240 adjusts the picture synthesis model and inputs pictures into the adjusted picture synthesis model to train it, the following is preferably used:
inputting the first group of pictures and the second group of pictures into the adjusted picture synthesis model to train the picture synthesis model.
Of course, to further improve the accuracy of judging whether the picture synthesis model has converged, other samples may also be used for training; for example, an M-th group of pictures and an (M+1)-th group of pictures, with M not equal to 1, may be input into the adjusted picture synthesis model to train it, where the M-th group of pictures are simulated pictures and the (M+1)-th group of pictures are real pictures corresponding to the M-th group of pictures.
That is, during training of the picture synthesis model, each training round may use the same training samples, e.g. always the first group of pictures and the second group of pictures; or new training samples may be used each time, e.g. always an M-th group of pictures and an (M+1)-th group of pictures; or the training samples may be refreshed periodically, e.g. new training samples every 5 training rounds. Of course, other schemes may be used as well; no specific limitation is made here, as long as the purpose of training is achieved.
The synthesis module 210 obtains a synthesized picture in one of two ways: one is to process the first group of pictures according to the extracted abstract features, giving what will be called a class-A synthesized picture; the other is to process the second group of pictures according to the extracted abstract features, giving what will be called a class-B synthesized picture. The reference pictures used by the comparison module 220 differ with the type of synthesized picture: the comparison module 220 compares a class-A synthesized picture with any picture in the second group of pictures, and a class-B synthesized picture with any picture in the first group of pictures. Because different reference pictures are used in the comparison step for different types of synthesized pictures, the accuracy of judging the convergence of the picture synthesis model can be improved.
It should be noted that although the synthesis module 210 can obtain synthesized pictures in two ways, it does not have to produce both a class-A and a class-B synthesized picture each time. In practice, only a class-A synthesized picture may be produced, in which case the comparison module 220 compares the synthesized picture with any picture in the second group of pictures and the judgment module 230 judges convergence from that comparison result alone. Alternatively, only a class-B synthesized picture may be produced, in which case the comparison module 220 compares the synthesized picture with any picture in the first group of pictures and the judgment module 230 judges convergence from that comparison result alone. Or, to improve the accuracy of the convergence judgment, both a class-A and a class-B synthesized picture may be produced; the comparison module 220 then compares the class-B synthesized picture with any picture in the first group of pictures and the class-A synthesized picture with any picture in the second group of pictures, the judgment module 230 judges convergence from both comparison results, and the picture synthesis model is determined to have converged only if both comparison results indicate convergence.
Preferably, as shown in Figure 2C, the feature extraction module 200 comprises a decoding module 200a and a sub-feature extraction module 200b, wherein:
the decoding module 200a is configured to decode the first group of pictures and the second group of pictures into a first group of data and a second group of data, respectively;
the sub-feature extraction module 200b is configured to extract the abstract features expressing the same physical meaning from the first group of data and the second group of data, respectively.
Preferably, as shown in Figure 2D, the synthesis module 210 comprises a unification module 210a, a processing module 210b and an encoding module 210c, wherein:
the unification module 210a is configured to characterize the separately extracted abstract features with the same physical parameters;
the processing module 210b is configured to replace the parameters expressing those abstract features in the first group of data with the same physical parameters, or to replace the parameters expressing those abstract features in the second group of data with the same physical parameters;
the encoding module 210c is configured to encode the replaced first group of data, or the replaced second group of data, to obtain the synthesized picture.
Preferably, when the encoding module 210c encodes the processed first group of data or the processed second group of data, predictive coding or transform-domain coding may be used; no specific limitation is made here.
In the embodiment of the invention, the extracted abstract features may be the highest network layer in each group of data; the highest layers extracted from the two groups are then characterized with the same physical parameters.
For example, suppose the first group of pictures are simulated pictures of building A and decoding them yields a first group of data comprising 10 network layers, while the second group of pictures are real pictures of building A and decoding them yields a second group of data comprising 8 network layers; then the 10th layer of the first group of data and the 8th layer of the second group of data are characterized with the same physical parameters.
The example above assumes that the data decoded from the first group of pictures and the data decoded from the second group of pictures have different numbers of network layers, but this is not a limitation; the two may also have the same number of layers, e.g. the network corresponding to the first group of data has 10 layers and the network corresponding to the second group of data also has 10 layers.
Of course, no specific limitation is placed on which network layer is extracted, as long as the abstract features expressing the same physical meaning are extracted from the two groups of data.
Because a physics simulation engine can synthesize a large number of pictures with randomness and diversity, in the embodiment of the invention the device further comprises, as shown in Figure 2E, a model building module 250 and an acquisition module 260, wherein:
the model building module 250 is configured to build a three-dimensional model with the physics simulation engine;
the acquisition module 260 is configured to set up a virtual camera in the physics simulation engine and shoot the three-dimensional model with the virtual camera to obtain the first group of pictures.
In this way, the first group of pictures obtained in the embodiment of the invention is diverse.
Preferably, the acquisition module 260 is specifically configured to:
shoot the three-dimensional model with the virtual camera based on ray tracing to obtain the first group of pictures.
Preferably, the comparison module 220 is specifically configured to: calculate a first picture synthesis loss value of the synthesized picture with reference to any picture in the first group of pictures; or calculate a second picture synthesis loss value of the synthesized picture with reference to any picture in the second group of pictures.
Preferably, as shown in Figure 2F, the judgment module 230 comprises a convergence judgment module 230a and a determination module 230b, wherein:
the convergence judgment module 230a is configured to judge whether the calculated first picture synthesis loss value and/or second picture synthesis loss value reach a convergence threshold;
the determination module 230b is configured to determine that the picture synthesis model has converged if the convergence judgment module 230a judges that the calculated first picture synthesis loss value and/or second picture synthesis loss value reach the convergence threshold.
Further, if the convergence judgment module 230a judges that the calculated first picture synthesis loss value and/or second picture synthesis loss value do not reach the convergence threshold, the determination module 230b determines that the picture synthesis model has not converged and training of the picture synthesis model continues.
The above is an example of judging whether the picture synthesis model has converged by means of picture synthesis loss values; convergence may of course be determined in other ways, or other parameters may be used to judge whether the picture synthesis model has converged, and the judgment is not limited to this example.
It should be noted that the convergence threshold in the embodiment of the invention may vary with the application scenario. For example, in a scenario that demands high accuracy of the pictures synthesized by the picture synthesis model, the convergence threshold is smaller, e.g. 8.0; in a scenario where the accuracy demanded of the synthesized pictures is not as high, the convergence threshold may be slightly increased, e.g. to 8.2. Of course, 8.0 and 8.2 are only specific examples and no limitation is intended.
The above discloses judging, within one cycle, whether the picture synthesis model has converged using the comparison result for the third group of pictures and/or the fourth group of pictures. To further improve the accuracy of the judgment, the embodiment of the invention provides the following:
the feature extraction module 200 is also configured to input the third group of pictures into the picture synthesis model and use the picture synthesis model to extract, from the first group of pictures and the third group of pictures respectively, abstract features expressing the same physical meaning;
the synthesis module 210 is also configured to process the third group of pictures according to the extracted abstract features to obtain a synthesized picture, the synthesized picture obtained from the third group of pictures being a fifth group of pictures;
the comparison module 220 is also configured to compare the fifth group of pictures with any picture in the first group of pictures;
the judgment module 230 is specifically configured to judge whether the picture synthesis model has converged according to the comparison result for at least one of the third group of pictures, the fourth group of pictures and the fifth group of pictures.
The feature extraction module 200 is specifically configured to decode the first group of pictures and the third group of pictures respectively to obtain a first group of data and a third group of data;
and to extract, from the first group of data and the third group of data respectively, abstract features that represent the same physical meaning.
The synthesis module 210 is specifically configured to characterize the separately extracted abstract features by the same physical parameter;
and to replace, with the same physical parameter, the parameter that represents the abstract features in the third group of data, and to encode the replaced third group of data to obtain the fifth group of pictures.
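The decode / extract / replace / encode path can be illustrated with a toy sketch. Nothing here is the patent's actual network: using the mean brightness of a picture as the "abstract feature" and averaging the two extracted features into one shared physical parameter are purely illustrative assumptions.

```python
import numpy as np

def decode(picture: np.ndarray) -> dict:
    """Stand-in for the decoding step: split a picture into an 'abstract
    feature' (here: mean brightness) and the residual content data."""
    feature = float(picture.mean())
    return {"feature": feature, "residual": picture - feature}

def encode(data: dict) -> np.ndarray:
    """Stand-in for the encoding step: rebuild a picture from the data."""
    return data["residual"] + data["feature"]

def synthesize_fifth_group(first_group, third_group):
    """For each picture pair, express both extracted features by one shared
    physical parameter, substitute it into the third-group data, and encode."""
    fifth_group = []
    for first_pic, third_pic in zip(first_group, third_group):
        shared = 0.5 * (decode(first_pic)["feature"] + decode(third_pic)["feature"])
        data = decode(third_pic)
        data["feature"] = shared          # replace with the same physical parameter
        fifth_group.append(encode(data))
    return fifth_group
```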
Similarly, the synthesized fourth group of pictures may also be input into the picture synthesis model to obtain a synthesized picture, for example a sixth group of pictures. In this case, when judging whether the picture synthesis model has converged according to the comparison results, the following manner may be used:
judging whether the picture synthesis model has converged according to the comparison result of at least one of the third group of pictures, the fourth group of pictures, the fifth group of pictures and the sixth group of pictures.
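A compact sketch of this broadened judgment follows. The text above only requires that at least one of the group comparisons be used; requiring every available comparison to reach the threshold is one possible policy and is assumed here for illustration.

```python
def model_converged(group_losses: dict, threshold: float = 8.0) -> bool:
    """group_losses maps a group name ('third', 'fourth', 'fifth', 'sixth')
    to its picture synthesis loss value; groups not evaluated are left as None."""
    available = [v for v in group_losses.values() if v is not None]
    return bool(available) and all(v <= threshold for v in available)

# Example under the "all available comparisons must pass" policy:
# model_converged({"third": 7.9, "fourth": 8.4, "fifth": None, "sixth": 7.6})  # -> False
```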
In this scheme, the picture synthesis model is trained on simulated pictures and the corresponding real pictures until the picture synthesis model converges. Since the pictures synthesized by the converged picture synthesis model have a high similarity to real pictures, the pictures synthesized by the converged model can be used as the training samples required by deep learning methods, so there is no need to actually collect the large number of training samples that deep learning methods require; training efficiency can therefore be improved. On the other hand, pictures in different scenes can also be synthesized, which also solves the problem that manually collecting pictures in special scenes is rather difficult.
In addition, during training of the picture synthesis model, the synthesized picture is compared with any one picture in the first group of pictures or the second group of pictures to judge whether the picture synthesis model has converged. In other words, no labels need to be attached to the reference pictures in advance at comparison time; an unsupervised training approach is used. From this perspective, the efficiency of training the picture synthesis model can also be improved.
Embodiment three
An embodiment of the present invention also proposes an electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method according to the various preferred examples of Embodiment One.
Embodiment four
An embodiment of the present invention also proposes a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to the various preferred examples of Embodiment One.
It will be appreciated by those skilled in the art that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and devices, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or implemented as hardware, or implemented as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on a computer-readable medium, which may include a computer storage medium (or non-transitory medium) and a communication medium (or transitory medium). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules or other data). A computer storage medium includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disc (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, a communication medium typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery medium.
Example embodiments have been disclosed herein, and although specific terms are used, they are used and should be interpreted only in a generic descriptive sense and not for purposes of limitation. In some instances, it will be apparent to those skilled in the art that, unless expressly stated otherwise, features, characteristics and/or elements described in connection with a particular embodiment may be used alone, or may be used in combination with features, characteristics and/or elements described in connection with other embodiments. Therefore, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Claims (18)

1. A method of training a picture synthesis model, characterized by comprising:
inputting a first group of pictures and a second group of pictures into a picture synthesis model, using the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features that represent the same physical meaning, and processing the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture; the first group of pictures being simulated pictures, and the second group of pictures being real pictures corresponding to the first group of pictures;
if the synthesized picture is a third group of pictures, the third group of pictures being obtained by processing the first group of pictures, comparing the synthesized picture with any one picture in the second group of pictures; and if the synthesized picture is a fourth group of pictures, the fourth group of pictures being obtained by processing the second group of pictures, comparing the synthesized picture with any one picture in the first group of pictures;
judging whether the picture synthesis model has converged according to the comparison result, and stopping training the picture synthesis model if the picture synthesis model has converged.
2. The method according to claim 1, wherein using the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features that represent the same physical meaning comprises:
decoding the first group of pictures and the second group of pictures respectively to obtain a first group of data and a second group of data;
extracting, from the first group of data and the second group of data respectively, abstract features that represent the same physical meaning.
3. The method according to claim 2, characterized in that processing the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture comprises:
characterizing the separately extracted abstract features by the same physical parameter;
replacing, with the same physical parameter, the parameter that represents the abstract features in the first group of data, and encoding the replaced first group of data to obtain the synthesized picture; or
replacing, with the same physical parameter, the parameter that represents the abstract features in the second group of data, and encoding the replaced second group of data to obtain the synthesized picture.
4. The method according to claim 1, wherein the first group of pictures is obtained in the following way:
establishing a three-dimensional model using a simulation physics engine;
setting a virtual camera in the simulation physics engine;
shooting the three-dimensional model with the virtual camera to obtain the first group of pictures.
5. The method according to claim 4, characterized in that shooting the three-dimensional model with the virtual camera to obtain the first group of pictures comprises:
shooting the three-dimensional model with the virtual camera based on ray-tracing technology to obtain the first group of pictures.
6. The method according to claim 1, wherein comparing the synthesized picture with any one picture in the first group of pictures comprises:
calculating a first picture synthesis loss value of the synthesized picture with reference to any one picture in the first group of pictures;
comparing the synthesized picture with any one picture in the second group of pictures comprises:
calculating a second picture synthesis loss value of the synthesized picture with reference to any one picture in the second group of pictures;
and judging whether the picture synthesis model has converged according to the comparison result comprises:
judging whether the calculated first picture synthesis loss value and/or second picture synthesis loss value reaches a convergence threshold;
determining that the picture synthesis model has converged if the calculated first picture synthesis loss value and/or second picture synthesis loss value reaches the convergence threshold.
7. The method according to any one of claims 1-6, characterized in that the method further comprises:
if the picture synthesis model has not converged, adjusting the picture synthesis model, and inputting pictures into the adjusted picture synthesis model to train the picture synthesis model until the picture synthesis model converges.
8. The method according to claim 1, wherein before judging whether the picture synthesis model has converged according to the comparison result, the method further comprises:
inputting the third group of pictures into the picture synthesis model, using the picture synthesis model to extract, from the first group of pictures and the third group of pictures respectively, abstract features that represent the same physical meaning, and processing the third group of pictures according to the extracted abstract features to obtain a synthesized picture, the synthesized picture obtained from the third group of pictures being a fifth group of pictures;
comparing the fifth group of pictures with any one picture in the first group of pictures;
and judging whether the picture synthesis model has converged according to the comparison result comprises:
judging whether the picture synthesis model has converged according to the comparison result of at least one of the third group of pictures, the fourth group of pictures and the fifth group of pictures.
9. A device for training a picture synthesis model, characterized by comprising:
a feature extraction module, configured to input a first group of pictures and a second group of pictures into a picture synthesis model, and to use the picture synthesis model to extract, from the first group of pictures and the second group of pictures respectively, abstract features that represent the same physical meaning, the first group of pictures being simulated pictures, and the second group of pictures being real pictures corresponding to the first group of pictures;
a synthesis module, configured to process the first group of pictures or the second group of pictures according to the extracted abstract features to obtain a synthesized picture;
a comparison module, configured to: if the synthesized picture is a third group of pictures, the third group of pictures being obtained by processing the first group of pictures, compare the synthesized picture with any one picture in the second group of pictures; and if the synthesized picture is a fourth group of pictures, the fourth group of pictures being obtained by processing the second group of pictures, compare the synthesized picture with any one picture in the first group of pictures;
a judgment module, configured to judge whether the picture synthesis model has converged according to the comparison result, and to stop training the picture synthesis model if the picture synthesis model has converged.
10. The device according to claim 9, characterized in that the feature extraction module includes a decoding module and a sub-feature extraction module, wherein:
the decoding module is configured to decode the first group of pictures and the second group of pictures respectively to obtain a first group of data and a second group of data;
the sub-feature extraction module is configured to extract, from the first group of data and the second group of data respectively, abstract features that represent the same physical meaning.
11. The device according to claim 10, characterized in that the synthesis module includes a unification module, a processing module and an encoding module, wherein:
the unification module is configured to characterize the separately extracted abstract features by the same physical parameter;
the processing module is configured to replace, with the same physical parameter, the parameter that represents the abstract features in the first group of data, or to replace, with the same physical parameter, the parameter that represents the abstract features in the second group of data;
the encoding module is configured to encode the replaced first group of data to obtain the synthesized picture, or to encode the replaced second group of data to obtain the synthesized picture.
12. The device according to claim 9, characterized by further comprising a model building module and an acquisition module, wherein:
the model building module is configured to establish a three-dimensional model using a simulation physics engine;
the acquisition module is configured to set a virtual camera in the simulation physics engine, and to shoot the three-dimensional model with the virtual camera to obtain the first group of pictures.
13. The device according to claim 12, characterized in that the acquisition module is specifically configured to:
shoot the three-dimensional model with the virtual camera based on ray-tracing technology to obtain the first group of pictures.
14. The device according to claim 9, characterized in that the comparison module is specifically configured to:
calculate a first picture synthesis loss value of the synthesized picture with reference to any one picture in the first group of pictures, or calculate a second picture synthesis loss value of the synthesized picture with reference to any one picture in the second group of pictures;
and the judgment module includes a convergence judgment module and a determining module, wherein:
the convergence judgment module is configured to judge whether the calculated first picture synthesis loss value and/or second picture synthesis loss value reaches a convergence threshold;
the determining module is configured to determine that the picture synthesis model has converged if the convergence judgment module judges that the calculated first picture synthesis loss value and/or second picture synthesis loss value reaches the convergence threshold.
15. The device according to any one of claims 9-14, characterized in that the device further comprises:
an adjustment module, configured to: if the picture synthesis model has not converged, adjust the picture synthesis model, and input pictures into the adjusted picture synthesis model to train the picture synthesis model until the picture synthesis model converges.
16. The device according to claim 9, characterized in that the feature extraction module is further configured to input the third group of pictures into the picture synthesis model, and to use the picture synthesis model to extract, from the first group of pictures and the third group of pictures respectively, abstract features that represent the same physical meaning;
the synthesis module is further configured to process the third group of pictures according to the extracted abstract features to obtain a synthesized picture, the synthesized picture obtained from the third group of pictures being a fifth group of pictures;
the comparison module is further configured to compare the fifth group of pictures with any one picture in the first group of pictures;
the judgment module is specifically configured to judge whether the picture synthesis model has converged according to the comparison result of at least one of the third group of pictures, the fourth group of pictures and the fifth group of pictures.
17. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method according to any one of claims 1-8.
18. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
CN201811636004.3A 2018-12-29 2018-12-29 Method and device for training picture synthesis model Active CN109726760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811636004.3A CN109726760B (en) 2018-12-29 2018-12-29 Method and device for training picture synthesis model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811636004.3A CN109726760B (en) 2018-12-29 2018-12-29 Method and device for training picture synthesis model

Publications (2)

Publication Number Publication Date
CN109726760A true CN109726760A (en) 2019-05-07
CN109726760B CN109726760B (en) 2021-04-16

Family

ID=66297907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811636004.3A Active CN109726760B (en) 2018-12-29 2018-12-29 Method and device for training picture synthesis model

Country Status (1)

Country Link
CN (1) CN109726760B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504671A (en) * 2014-12-12 2015-04-08 浙江大学 Method for generating virtual-real fusion image for stereo display
CN107577985A (en) * 2017-07-18 2018-01-12 南京邮电大学 The implementation method of the face head portrait cartooning of confrontation network is generated based on circulation
CN107767328A (en) * 2017-10-13 2018-03-06 上海交通大学 The moving method and system of any style and content based on the generation of a small amount of sample
CN107967463A (en) * 2017-12-12 2018-04-27 武汉科技大学 A kind of conjecture face recognition methods based on composograph and deep learning
CN108898676A (en) * 2018-06-19 2018-11-27 青岛理工大学 Method and system for detecting collision and shielding between virtual and real objects

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022174952A1 (en) * 2021-02-22 2022-08-25 Dspace Gmbh Method for parameterizing an image synthesis from a 3-d model

Also Published As

Publication number Publication date
CN109726760B (en) 2021-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant