Summary of the invention
In view of this, embodiments of the present disclosure provide a fidelity image generation method, a fidelity image generation apparatus, and an electronic device, so as to at least partly solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides a fidelity image generation method, comprising:
acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images;
obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element;
constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images;
generating, by using the reconstruction model, a fidelity image that matches input information of the target object, the fidelity image containing one or more predicted actions that match the input information.
According to a specific implementation of the embodiment of the present disclosure, the acquiring multiple images containing a target object comprises:
performing video capture on the target object with a camera device to obtain a video file containing multiple video frames;
selecting some or all of the video frames from the video file to form the multiple images containing the target object.
According to a specific implementation of the embodiment of the present disclosure, the acquiring multiple images containing a target object comprises:
setting broadcast samples of different styles for the target object;
obtaining sample videos of the target object for the broadcast samples of different styles;
obtaining the multiple images containing the target object from the sample videos.
According to a specific implementation of the embodiment of the present disclosure, the obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element comprises:
performing 3D reconstruction on the specific region of the target object to obtain a 3D partial object;
obtaining a three-dimensional mesh of the 3D partial object, the three-dimensional mesh including preset coordinate values;
determining the texture map of the specific region based on pixel values at different three-dimensional mesh coordinates.
According to a specific implementation of the embodiment of the present disclosure, the obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element further comprises:
performing key point detection for the specific element on the multiple images to obtain multiple key points related to the specific element;
forming the shape constraint map describing the specific element based on the multiple key points.
According to a specific implementation of the embodiment of the present disclosure, the constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and the two-dimensional image information of the multiple images comprises:
setting a convolutional neural network for training the reconstruction model, and training the convolutional neural network with the images containing the target object, wherein the number of nodes in the output layer of the convolutional neural network is consistent with the number of nodes in the input layer.
According to a specific implementation of the embodiment of the present disclosure, the training the convolutional neural network with the images containing the target object comprises:
measuring a prediction error with a mean square error function, the prediction error describing the difference between an output figure frame and a manually captured frame;
reducing the prediction error by means of a backpropagation function.
According to a specific implementation of the embodiment of the present disclosure, the generating, by using the reconstruction model, a fidelity image that matches the input information of the target object comprises:
obtaining input information for the target object, and parsing the input information to obtain a first parsing result;
performing model quantization on the first parsing result to obtain an action quantization vector of the target object;
generating multiple fidelity images matching the action quantization vector.
According to a specific implementation of the embodiment of the present disclosure, the generating multiple fidelity images matching the action quantization vector comprises:
using the texture map as a fixed input of the fidelity images;
determining a motion constraint value of the specific element based on element values in the action quantization vector;
predicting, from the continuous motion constraint values and the fixed texture map, multiple fidelity images matching the input information.
In a second aspect, an embodiment of the present disclosure provides a fidelity image generation apparatus, comprising:
an acquisition module, configured to acquire multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images;
an obtaining module, configured to determine, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element;
a construction module, configured to construct a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images;
a generation module, configured to generate, by using the reconstruction model, a fidelity image that matches input information of the target object, the fidelity image containing one or more predicted actions that match the input information.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the fidelity image generation method in the first aspect or any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform the fidelity image generation method in the first aspect or any implementation of the first aspect.
In a fifth aspect, an embodiment of the present disclosure further provides a computer program product, comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to perform the fidelity image generation method in the first aspect or any implementation of the first aspect.
With the fidelity image generation scheme in the embodiments of the present disclosure — acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images; obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element; constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images; and generating, by using the reconstruction model, a fidelity image that matches input information of the target object, the fidelity image containing one or more predicted actions that match the input information — an animated figure matching the input information can be realistically simulated, improving user experience.
Specific embodiments
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Implementations of the present disclosure are illustrated below by way of specific examples, and those skilled in the art can easily understand other advantages and effects of the disclosure from the content disclosed in this specification. Obviously, the described embodiments are only some rather than all of the embodiments of the disclosure. The disclosure can also be implemented or applied through other, different specific embodiments, and the details in this specification can be modified or changed in various ways from different viewpoints and for different applications without departing from the spirit of the disclosure. It should be noted that, unless they conflict, the following embodiments and the features in the embodiments can be combined with one another. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the disclosure without creative effort fall within the protection scope of the disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those of ordinary skill in the art should understand that an aspect described herein can be implemented independently of any other aspect, and that two or more of these aspects can be combined in various ways. For example, an apparatus can be implemented and/or a method can be practiced using any number of the aspects set forth herein. In addition, such an apparatus can be implemented and/or such a method can be practiced using other structures and/or functionality in addition to, or other than, one or more of the aspects set forth herein.
It should also be noted that the drawings provided in the following embodiments only illustrate the basic concept of the disclosure in a schematic way. The drawings show only the components related to the disclosure rather than being drawn according to the number, shapes, and sizes of the components in an actual implementation; in an actual implementation, the form, quantity, and proportions of the components may be changed arbitrarily, and the component layout may be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects can be practiced without these specific details.
An embodiment of the present disclosure provides a fidelity image generation method. The fidelity image generation method provided in this embodiment can be executed by a computing apparatus, which can be implemented as software or as a combination of software and hardware, and which can be integrated in a server, a terminal device, or the like.
Referring to Fig. 1, a fidelity image generation method provided by an embodiment of the present disclosure includes the following steps:
S101: acquire multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images.
The actions and expressions of the target object are the content that the scheme of the present disclosure intends to simulate and predict. As an example, the target object can be a real person who performs network broadcasts, or any other subject with an information dissemination function, such as a television announcer, a news program anchor, or a lecturing teacher.
The target object is usually a person with a dissemination function. Since such a person usually has a certain degree of popularity, when a massive amount of content requires the target object to perform broadcasts involving speech and/or video actions, considerable cost is usually incurred. Meanwhile, for live programs, the target object usually cannot appear in multiple live rooms (or on multiple live channels) at the same time. If an effect such as an "anchor avatar" — the anchor appearing in several places at once — is desired, it is usually difficult to achieve with on-site reporting by a real person.
For this reason, video capture of the target object (for example, an anchor) needs to be performed in advance with a recording device such as a video camera, so as to collect and record the target object's broadcasts of different content. For example, one can record a session in which the target object hosts a live room, or record the target object broadcasting a piece of news.
The video captured of the target object contains multiple image frames. From these frames, multiple images containing one or more continuous actions of the target object can be selected to form an image set. By training on this image set, the actions and expressions of the simulated target object can be predicted for different input content.
S102: obtain, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element.
After multiple images (for example, video frames) related to the target object are obtained, constituent parts of the target object can be selected for modeling. To improve modeling efficiency, a specific region whose resolution requirement for the user is not high (for example, the face region) and specific elements with high discrimination for the user (for example, the mouth and the eyes) can be selected for modeling.
Specifically, the texture map of the specific region of the target object (for example, a face texture) and the key points of the specific element (for example, key points of facial features such as the eyes and the mouth) are obtained from the multiple images, so as to constitute the texture map of the target object and the shape constraint map of the specific element.
The texture of the specific region can be obtained by means of 3D reconstruction. Taking a face as an example, a three-dimensional face mesh is obtained by 3D face reconstruction, and the face pixel values corresponding to all the three-dimensional mesh points constitute the face texture of the target object (for example, an anchor). The 3D face reconstruction can be realized with existing techniques.
The shape constraint map of the specific element can be realized by means of key point detection. Taking the eyes and the mouth as an example, eye and mouth key points are obtained with an existing facial key point detection algorithm. The key points around each eye/the mouth are connected in turn to form a closed eye/mouth region. The pupil region of the eye is filled with blue, the remaining parts of the eye are filled with white, and the closed mouth region is filled with red. The image obtained after filling the closed regions formed by the key points of the specific element with color constitutes the shape constraint map of the specific element.
S103: construct a reconstruction model of the target object based on the texture map, the shape constraint map, and the two-dimensional image information of the multiple images.
After the texture map and the shape constraint map are obtained, they can be combined with the multiple images from which they were generated to train and construct the reconstruction model for the target object by means of a configured convolutional neural network.
The convolutional neural network structure may include several convolutional layers, pooling layers, fully connected layers, and a classifier, where the number of nodes in the output layer of the last layer of the convolutional neural network structure is the same as the number of nodes in the input layer, so that the network can directly output video frames rendering the target object's figure.
During the training of the convolutional neural network, the prediction error is measured with a mean square error function, that is, as the difference between the predicted output frame of the target object's figure and the manually captured frame of the target object's figure; this difference is then reduced by means of a backpropagation function.
S104: generate, by using the reconstruction model, a fidelity image that matches the input information of the target object, the fidelity image containing one or more predicted actions that match the input information.
After the reconstruction model is in place, the various actions and expressions of the target object in a video can be predicted by means of video animation based on the reconstruction model. Specifically, a video file containing the target object's actions and expressions can be produced by generating fidelity images, where the fidelity images can serve as all the frames or as key frames of the video file and contain the set of images of the one or more predicted actions that match the input information.
The input information can take various forms; for example, it can be text or audio. After data parsing, the input information is converted into parameters that match the texture map and the shape constraint map, and the generation of the fidelity image is finally completed by calling the texture map and the shape constraint map with the reconstruction model obtained after training.
In the prediction stage, the texture map of the specific region of the target object and the shape constraint of the specific element can be given; using the trained reconstruction model, the image information of the two-dimensional anchor figure is predicted, and with the continuous shape constraints of the specific element and the fixed texture of the specific region as input, continuous broadcast images of the anchor are predicted.
In the process of implementing step S101, referring to Fig. 2, according to a specific implementation of the embodiment of the present disclosure, the acquiring multiple images containing a target object may include the following steps:
S201: perform video capture on the target object with a camera device to obtain a video file containing multiple video frames.
The target object is usually a person with a dissemination function. Since such a person usually has a certain degree of popularity, when a massive amount of content requires the target object to perform broadcasts involving speech and/or video actions, considerable cost is usually incurred. Meanwhile, for live programs, the target object usually cannot appear in multiple live rooms (or on multiple live channels) at the same time. If an effect such as an "anchor avatar" is desired, it is usually difficult to achieve with on-site reporting by a real person.
For this reason, video capture of the target object (for example, an anchor) needs to be performed in advance with a recording device such as a video camera, so as to collect and record the target object's broadcasts of different content. For example, one can record a session in which the target object hosts a live room, or record the target object broadcasting a piece of news.
S202: select some or all of the video frames from the video file to form the multiple images containing the target object.
The video captured of the target object contains multiple image frames. From the video, multiple images containing one or more continuous actions of the target object can be selected to form an image set. By training on this image set, the actions and expressions of the target object for different input content can be predicted and simulated.
As another implementation of step S101, referring to Fig. 3, according to a specific implementation of the embodiment of the present disclosure, the acquiring multiple images containing a target object may also include steps S301-S303:
S301: set broadcast samples of different styles for the target object.
To obtain the various actions and expressions of the target object more comprehensively, different types of broadcast samples can be preset. For example, the broadcast samples may cover different moods such as happiness, sadness, and anger, so as to obtain more comprehensive training samples.
S302: obtain sample videos of the target object for the broadcast samples of different styles.
By video-sampling the target object, sample videos of the target object for the broadcast samples of different styles can be obtained.
S303: obtain the multiple images containing the target object from the sample videos.
According to actual needs, multiple images containing the target object can be selected from the multiple video frames of the sample videos; the multiple images can be some or all of the video frames of the sample videos, or key frames selected from all the sample videos can serve as the multiple images.
In the process of implementing step S102, according to a specific implementation of the embodiment of the present disclosure, referring to Fig. 4, the obtaining, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element may include:
S401: perform 3D reconstruction on the specific region of the target object to obtain a 3D partial object.
After multiple images (for example, video frames) related to the target object are obtained, constituent parts of the target object can be selected for modeling. To improve modeling efficiency, a specific region whose resolution requirement for the user is not high (for example, the face region) and specific elements with high discrimination for the user (for example, the mouth and the eyes) can be selected for modeling.
S402: obtain the three-dimensional mesh of the 3D partial object, the three-dimensional mesh including preset coordinate values.
The 3D partial object describes its specific position through the three-dimensional mesh, for which specific coordinate values are set; for example, the three-dimensional mesh can be described by setting planar two-dimensional coordinates and a spatial height coordinate.
S403: determine the texture map of the specific region based on the pixel values at different three-dimensional mesh coordinates.
The pixel values at the different three-dimensional mesh coordinates can be connected together to form a mesh plane, and this mesh plane forms the texture map of the specific region.
Through the implementation of steps S401-S403, the texture map of the specific region can be formed more quickly, improving the efficiency of texture map generation.
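Under simplifying assumptions, step S403 can be sketched as follows: each 3D mesh point carries a preset (u, v) texture coordinate and a pixel value sampled from the source image, and writing each value at its coordinate assembles the texture map. The mesh points and pixel values below are hypothetical.

```python
# Sketch: assemble a texture map from mesh points with preset coordinates.
def build_texture(mesh_points, width, height):
    """Place each mesh point's pixel value at its preset (u, v) coordinate."""
    texture = [[0] * width for _ in range(height)]  # 0 = unsampled texel
    for u, v, pixel in mesh_points:
        texture[v][u] = pixel
    return texture

mesh_points = [
    (0, 0, 120), (1, 0, 130),   # (u, v, sampled pixel value)
    (2, 1, 140), (3, 3, 150),
]
texture = build_texture(mesh_points, 4, 4)
```

A real texture map would interpolate between mesh points rather than leave unsampled texels at zero; the sketch only shows the coordinate-to-texel bookkeeping.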
In the process of executing step S104, as a specific implementation, referring to Fig. 5, the generating, by using the reconstruction model, a fidelity image that matches the input information of the target object may include the following steps:
S501: obtain input information for the target object, and parse the input information to obtain a first parsing result.
The input information can take various forms; for example, it can be text or audio. After data parsing, the input information is converted into a first parsing result, which contains parameters that match the texture map and the shape constraint map; the generation of the fidelity image is finally completed by calling the texture map and the shape constraint map with the reconstruction model obtained after training.
S502: perform model quantization on the first parsing result to obtain an action quantization vector of the target object.
The first parsing result contains motion amplitude parameters for the specific element on the target object. Taking the mouth as an example, the motion amplitude can be quantized as 1 when the mouth is fully open and as 0 when the mouth is fully closed; values between 0 and 1 describe intermediate states of the mouth between fully open and fully closed.
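The mouth-amplitude quantization just described can be sketched directly. The per-frame pixel measurements and the 30-pixel "fully open" reference are hypothetical values chosen for illustration.

```python
# Sketch: quantize a measured mouth opening to the [0, 1] amplitude range,
# where 1.0 means fully open and 0.0 means fully closed.
def quantize_amplitude(opening, max_opening):
    """Clamp opening/max_opening to the [0, 1] quantization range."""
    return max(0.0, min(1.0, opening / max_opening))

# hypothetical per-frame mouth openings in pixels, with 30 px = fully open
openings = [0.0, 9.0, 15.0, 30.0, 33.0]
action_vector = [quantize_amplitude(o, 30.0) for o in openings]
```

Note the clamp: a measurement beyond the fully-open reference (33 px here) still quantizes to 1.0, so the action quantization vector stays within the range the text defines.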
S503: generate multiple fidelity images matching the action quantization vector.
Through the action quantization vector, the motion amplitude of the specific element on the target object can be described in the form of a sequence of fidelity images; by continuously stitching together fidelity images containing the specific motion element at different motion amplitudes, a prediction result containing the different actions of the target object is formed.
Specifically, the generating multiple fidelity images matching the action quantization vector may include steps S5031-S5033:
S5031: use the texture map as a fixed input of the fidelity images.
Since the texture map has relatively low sensitivity for the user, the texture map can serve as a fixed input of the predicted target object during the formation of the fidelity images; that is, the texture map remains unchanged across the fidelity images.
S5032: determine the motion constraint value of the specific element based on the element values in the action quantization vector.
The element values in the action quantization vector can describe the motion amplitude of the specific element on the target object in a given fidelity image; by continuously stitching together fidelity images containing the specific motion element at different motion amplitudes, a prediction result containing the different actions of the target object is formed.
S5033: predict, from the continuous motion constraint values and the fixed texture map, multiple fidelity images matching the input information.
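Steps S5031-S5033 can be sketched as a frame loop: the texture map stays fixed while the motion constraint value changes per frame, and each (texture, constraint) pair is the input from which one fidelity image would be predicted. `predict_frame` is a hypothetical stand-in for the trained reconstruction model, which is not specified at code level in the disclosure.

```python
# Sketch: fixed texture + per-frame motion constraints -> a frame sequence.
def predict_frame(texture, constraint):
    """Stand-in for the reconstruction model's per-frame prediction."""
    return {"texture": texture, "mouth_constraint": constraint}

texture_map = "fixed_face_texture"             # fixed input, unchanged per frame
action_vector = [0.0, 0.25, 0.5, 0.75, 1.0]    # continuous motion constraint values
frames = [predict_frame(texture_map, c) for c in action_vector]
```

Stitching the resulting frames in order yields the continuous motion (here, a mouth opening from fully closed to fully open) that the prediction result is meant to contain.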
Corresponding to the above method embodiments, referring to Fig. 6, an embodiment of the present disclosure also discloses a fidelity image generation apparatus 60, comprising:
an acquisition module 601, configured to acquire multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images.
The actions and expressions of the target object are the content that the scheme of the present disclosure intends to simulate and predict. As an example, the target object can be a real person who performs network broadcasts, or any other subject with an information dissemination function, such as a television announcer, a news program anchor, or a lecturing teacher.
The target object is usually a person with a dissemination function. Since such a person usually has a certain degree of popularity, when a massive amount of content requires the target object to perform broadcasts involving speech and/or video actions, considerable cost is usually incurred. Meanwhile, for live programs, the target object usually cannot appear in multiple live rooms (or on multiple live channels) at the same time. If an effect such as an "anchor avatar" is desired, it is usually difficult to achieve with on-site reporting by a real person.
For this reason, video capture of the target object (for example, an anchor) needs to be performed in advance with a recording device such as a video camera, so as to collect and record the target object's broadcasts of different content. For example, one can record a session in which the target object hosts a live room, or record the target object broadcasting a piece of news.
The video captured of the target object contains multiple image frames. From the video, multiple images containing one or more continuous actions of the target object can be selected to form an image set. By training on this image set, the actions and expressions of the target object for different input content can be predicted and simulated.
An obtaining module 602 is configured to determine, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element.
After multiple images (for example, video frames) related to the target object are obtained, constituent parts of the target object can be selected for modeling. To improve modeling efficiency, a specific region whose resolution requirement for the user is not high (for example, the face region) and specific elements with high discrimination for the user (for example, the mouth and the eyes) can be selected for modeling.
Specifically, the texture map of the specific region of the target object (for example, a face texture) and the key points of the specific element (for example, key points of facial features such as the eyes and the mouth) are obtained from the multiple images, so as to constitute the texture map of the target object and the shape constraint map of the specific element.
The texture of the specific region can be obtained by means of 3D reconstruction. Taking a face as an example, a three-dimensional face mesh is obtained by 3D face reconstruction, and the face pixel values corresponding to all the three-dimensional mesh points constitute the face texture of the target object (for example, an anchor). The 3D face reconstruction can be realized with existing techniques.
The shape constraint map of the specific element can be realized by means of key point detection. Taking the eyes and the mouth as an example, eye and mouth key points are obtained with an existing facial key point detection algorithm. The key points around each eye/the mouth are connected in turn to form a closed eye/mouth region. The pupil region of the eye is filled with blue, the remaining parts of the eye are filled with white, and the closed mouth region is filled with red. The image obtained after filling the closed regions formed by the key points of the specific element with color constitutes the shape constraint map of the specific element.
A construction module 603 is configured to construct the reconstruction model of the target object based on the texture map, the shape constraint map, and the two-dimensional image information of the multiple images.
After the texture map and the shape constraint map are obtained, they can be combined with the multiple images from which they were generated to train and construct the reconstruction model for the target object by means of a configured convolutional neural network.
Specifically, convolutional neural networks structure may include several convolutional layers, pond layer, full articulamentum and classifier.
Wherein the output layer of the last layer of the convolutional neural networks structure is as the interstitial content of input layer, so as to directly defeated
The video frame of target object image is generated out.
During training of the convolutional neural network, the prediction error is measured with a mean square error function, that is, the difference between a predicted target object image frame and a manually acquired target object image frame. This difference is then reduced by means of a back-propagation function.
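The training rule above can be illustrated on a final linear layer alone: measure the mean square error between the predicted frame and the reference frame, then step the weights along the negative gradient. All shapes and the learning-rate choice below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def mse(pred, target):
    """Mean square error between a predicted frame and a reference frame."""
    return float(np.mean((pred - target) ** 2))

# Minimal back-propagation sketch on a single linear layer y = W @ x:
# the MSE gradient with respect to W is (2 / n) * outer(err, x).
rng = np.random.default_rng(1)
x = rng.standard_normal(9)        # pooled feature vector (hypothetical)
target = rng.standard_normal(4)   # flattened ground-truth frame
W = rng.standard_normal((4, 9))

lr = 1.0 / float(x @ x)           # step size scaled so the error halves
initial_loss = mse(W @ x, target)
for _ in range(60):
    err = W @ x - target
    W -= lr * (2.0 / err.size) * np.outer(err, x)  # gradient descent step

final_loss = mse(W @ x, target)   # driven (numerically) to zero
```

With this step size each update halves the residual, so the loss decays geometrically; a full network would propagate the same gradient backwards through every layer.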
A generation module 604 is configured to use the reconstruction model to generate a fidelity image that matches input information of the target object, the fidelity image containing one or more prediction actions matching the input information.
After the reconstruction model is established, the various actions and expressions of the target object in a video can be predicted with the reconstruction model in the manner of video animation. Specifically, a video file containing the actions and expressions of the target object can be produced by generating fidelity images, which can serve as all of the frames or as the key frames of the video file; the fidelity images contain a set of multiple images of the one or more prediction actions matching the input information.
The input information can take various forms, for example, text or audio. After data parsing, the input information is converted into parameters that match the texture map and the shape constraint map, and the generation of the fidelity image is finally completed by invoking the texture map and the shape constraint map with the reconstruction model obtained after training.
In the prediction stage, the texture map of the specific region and the shape constraints of the specific elements of the target object are provided, and the trained reconstruction model is used to predict the image information of a two-dimensional anchor image: taking the continuous shape constraints of the specific elements and the fixed texture of the specific region as input, consecutive anchor broadcast images are predicted.
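The prediction loop described here can be sketched as follows, with a single random linear map standing in, purely hypothetically, for the trained reconstruction model: the texture stays fixed while the shape constraint changes from frame to frame.

```python
import numpy as np

def predict_frame(texture, constraint, weights):
    """Stand-in forward pass: the fixed texture map and one shape-constraint
    map are stacked into a feature vector and mapped to an output frame."""
    features = np.concatenate([texture.ravel(), constraint.ravel()])
    return (weights @ features).reshape(texture.shape)

def predict_sequence(texture, constraint_sequence, weights):
    """Hold the texture fixed and sweep a sequence of shape constraints to
    produce consecutive broadcast frames, as in the prediction stage above."""
    return [predict_frame(texture, c, weights) for c in constraint_sequence]

rng = np.random.default_rng(2)
texture = rng.standard_normal((4, 4))                          # fixed texture
constraints = [rng.standard_normal((4, 4)) for _ in range(5)]  # one per frame
weights = rng.standard_normal((texture.size, 2 * texture.size))
frames = predict_sequence(texture, constraints, weights)
```

In an actual system the per-frame constraint maps would be driven by the parsed text or audio input, and `weights` would be the trained network rather than random values.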
The device shown in Fig. 6 can correspondingly execute the content of the above method embodiments. For parts not described in detail in this embodiment, reference may be made to the content recorded in the above method embodiments, and details are not repeated here.
Referring to Fig. 7, an embodiment of the present disclosure further provides an electronic device 70, which includes:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor is able to carry out the fidelity image generation method in the foregoing method embodiments.
An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to execute the fidelity image generation method in the foregoing method embodiments.
An embodiment of the present disclosure further provides a computer program product, including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to execute the fidelity image generation method in the foregoing method embodiments.
Referring now to Fig. 7, it shows a schematic structural diagram of an electronic device 70 suitable for implementing the embodiments of the present disclosure. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 70 may include a processing device (such as a central processing unit or a graphics processor) 701, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 70. The processing device 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices can be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, and gyroscope; output devices 707 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage devices 708 including, for example, a magnetic tape and hard disk; and a communication device 709. The communication device 709 can allow the electronic device 70 to communicate wirelessly or by wire with other devices to exchange data. Although the figure shows the electronic device 70 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 709, or installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing device 701, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the above-mentioned computer-readable medium of the present disclosure can be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium can be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium can include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present disclosure, a computer-readable storage medium can be any tangible medium that contains or stores a program, where the program can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal can take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any appropriate combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or it may exist independently without being assembled into the electronic device.
The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain at least two internet protocol addresses; send to a node evaluation device a node evaluation request including the at least two internet protocol addresses, wherein the node evaluation device selects an internet protocol address from the at least two internet protocol addresses and returns it; and receive the internet protocol address returned by the node evaluation device, wherein the obtained internet protocol address indicates an edge node in a content distribution network.
Alternatively, the above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a node evaluation request including at least two internet protocol addresses; select an internet protocol address from the at least two internet protocol addresses; and return the selected internet protocol address, wherein the received internet protocol address indicates an edge node in a content distribution network.
The computer program code for executing the operations of the present disclosure can be written in one or more programming languages or a combination thereof. The above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the internet using an internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram can represent a module, program segment, or part of code, which contains one or more executable instructions for realizing a specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes can occur in an order different from that marked in the drawings. For example, two boxes shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be realized by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure can be realized by means of software or by means of hardware. The name of a unit does not, under certain circumstances, constitute a limitation on the unit itself; for example, a first acquisition unit can also be described as "a unit for obtaining at least two internet protocol addresses".
It should be appreciated that each section of the disclosure can be realized with hardware, software, firmware or their combination.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution that can easily be conceived by those familiar with the technical field, within the technical scope disclosed by the present disclosure, should be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.