CN110035271A - Fidelity image generation method, device and electronic equipment - Google Patents


Info

Publication number
CN110035271A
Authority
CN
China
Prior art keywords
target object
multiple images
image
fidelity
specific
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910216551.4A
Other languages
Chinese (zh)
Other versions
CN110035271B (en)
Inventor
郭冠军 (Guo Guanjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910216551.4A
Publication of CN110035271A
Application granted
Publication of CN110035271B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure provide a fidelity image generation method, an apparatus, and an electronic device, belonging to the technical field of data processing. The method comprises: acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images; obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element; constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images; and using the reconstruction model to generate a fidelity image matching input information for the target object, the fidelity image containing one or more predicted actions matching the input information. The processing scheme of the present application improves the realism of generated images.

Description

Fidelity image generation method, device and electronic equipment
Technical field
The present disclosure relates to the technical field of data processing, and in particular to a fidelity image generation method, an apparatus, and an electronic device.
Background technique
With the development of network technology, applications of artificial intelligence in network scenarios have advanced greatly. As one specific application demand, virtual characters are used for interaction in more and more network environments, for example a virtual anchor provided in a network live broadcast who delivers the live content in a personified way and provides necessary guidance for the broadcast, thereby enhancing the sense of presence and interactivity of the live broadcast and improving the effect of network live broadcasting.
Expression simulation (for example, mouth-shape action simulation) is a branch of artificial intelligence. At present, expression simulation mainly drives a character's facial expressions through text-driven, natural-speech-driven, and audio-video hybrid modeling methods. For example, in the text-driven approach, the input text information is usually converted by a TTS (Text to Speech) engine into a corresponding phoneme sequence, phoneme durations, and a corresponding speech waveform; corresponding model units are then selected from a model library and, after smoothing and with a corresponding synchronization algorithm, the voice and facial expression actions corresponding to the input text content are finally presented.
In the prior art, the simulated expressions are monotonous and sometimes even distorted, so that the result looks more like a "robot" performing, and the fidelity of the facial expression actions still falls far short of the expressions of a real person.
Summary of the invention
In view of this, embodiments of the present disclosure provide a fidelity image generation method, an apparatus, and an electronic device that at least partially solve the problems existing in the prior art.
In a first aspect, an embodiment of the present disclosure provides a fidelity image generation method, comprising:
acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images;
obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element;
constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images;
using the reconstruction model to generate a fidelity image matching input information for the target object, the fidelity image containing one or more predicted actions matching the input information.
According to a specific implementation of the embodiment of the present disclosure, acquiring the multiple images containing the target object comprises:
performing video acquisition of the target object with a camera device to obtain a video file containing multiple video frames;
selecting some or all of the video frames from the video file to form the multiple images containing the target object.
According to a specific implementation of the embodiment of the present disclosure, acquiring the multiple images containing the target object comprises:
setting broadcast samples of different styles for the target object;
obtaining sample videos of the target object for the broadcast samples of different styles;
obtaining the multiple images containing the target object from the sample videos.
According to a specific implementation of the embodiment of the present disclosure, obtaining, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element comprises:
performing 3D reconstruction of the specific region on the target object to obtain a 3D partial object;
obtaining a three-dimensional mesh of the 3D partial object, the three-dimensional mesh containing preset coordinate values;
determining the texture map of the specific region based on pixel values at different three-dimensional mesh coordinates.
According to a specific implementation of the embodiment of the present disclosure, obtaining, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element further comprises:
performing key point detection for the specific element in the multiple images to obtain multiple key points related to the specific element;
forming, based on the multiple key points, the shape constraint map describing the specific element.
According to a specific implementation of the embodiment of the present disclosure, constructing the reconstruction model of the target object based on the texture map, the shape constraint map, and the two-dimensional image information of the multiple images comprises:
setting up a convolutional neural network for training the reconstruction model, and training images containing the target object with the convolutional neural network, wherein the number of nodes in the last layer of the convolutional neural network is consistent with that of the input layer.
According to a specific implementation of the embodiment of the present disclosure, training the images containing the target object with the convolutional neural network comprises:
measuring a prediction error with a mean square error function, the prediction error describing the difference between an output likeness frame and a manually acquired frame;
reducing the prediction error using back propagation.
According to a specific implementation of the embodiment of the present disclosure, using the reconstruction model to generate the fidelity image matching the input information for the target object comprises:
obtaining the input information for the target object and parsing the input information to obtain a first parsing result;
performing model quantization on the first parsing result to obtain an action quantization vector of the target object;
generating multiple fidelity images matching the action quantization vector.
According to a specific implementation of the embodiment of the present disclosure, generating the multiple fidelity images matching the action quantization vector comprises:
using the texture map as a fixed input of the fidelity images;
determining a motion constraint value of the specific element based on element values in the action quantization vector;
predicting, from continuous motion constraint values and the fixed texture map, multiple fidelity images matching the input information.
In a second aspect, an embodiment of the present disclosure provides a fidelity image generating apparatus, comprising:
an acquisition module for acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images;
an obtaining module for determining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element;
a construction module for constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images;
a generation module for using the reconstruction model to generate a fidelity image matching input information for the target object, the fidelity image containing one or more predicted actions matching the input information.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the fidelity image generation method in the first aspect or any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the fidelity image generation method in the first aspect or any implementation of the first aspect.
In a fifth aspect, an embodiment of the present disclosure further provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program containing program instructions which, when executed by a computer, cause the computer to perform the fidelity image generation method in the first aspect or any implementation of the first aspect.
The fidelity image generation scheme in the embodiments of the present disclosure comprises acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images; obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element; constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images; and using the reconstruction model to generate a fidelity image matching input information for the target object, the fidelity image containing one or more predicted actions matching the input information. Through the processing scheme of the present application, an animated likeness matching the input information can be simulated realistically, improving the user experience.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a fidelity image generation method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of another fidelity image generation method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of another fidelity image generation method provided by an embodiment of the present disclosure;
Fig. 4 is a schematic flowchart of another fidelity image generation method provided by an embodiment of the present disclosure;
Fig. 5 is a schematic flowchart of another fidelity image generation method provided by an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a fidelity image generating apparatus provided by an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Specific embodiment
Embodiments of the present disclosure are described in detail below with reference to the drawings.
The following describes implementations of the present disclosure through specific examples, and those skilled in the art can easily understand other advantages and effects of the disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. The disclosure may also be implemented or applied through other different specific embodiments, and various details in this specification may be modified or changed based on different viewpoints and applications without departing from the spirit of the disclosure. It should be noted that, where there is no conflict, the features in the following embodiments may be combined with each other. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those of ordinary skill in the art will appreciate that an aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, a device may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method practiced using structures and/or functionality other than one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely explain the basic idea of the present disclosure in a schematic way. The drawings show only the components related to the present disclosure rather than being drawn according to the number, shape, and size of components in an actual implementation; in an actual implementation, the type, quantity, and proportion of each component may change arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects may be practiced without these specific details.
An embodiment of the present disclosure provides a fidelity image generation method. The fidelity image generation method provided by this embodiment may be executed by a computing device, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a server, a terminal device, or the like.
Referring to Fig. 1, a fidelity image generation method provided by an embodiment of the present disclosure includes the following steps:
S101: acquire multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images.
The actions and expressions of the target object are the content that the scheme of the present disclosure simulates and predicts. As an example, the target object may be a real person who performs network broadcasts, or another object with an information dissemination function, such as a television announcer, a news presenter, or a lecturing teacher.
The target object is usually a person with a dissemination function. Since such a person usually has a certain popularity, considerable cost is usually incurred when a massive amount of content requires the target object to perform broadcasts containing voice and/or video actions. Meanwhile, for live programs, the target object usually cannot appear in multiple live rooms (or on multiple live channels) at the same time; if an effect such as an "anchor's avatar" is desired, it is usually difficult to achieve through on-site reporting by a real person.
For this reason, video of the target object (for example, an anchor) needs to be acquired in advance with recording equipment such as a video camera, collecting and recording the target object's broadcasts of different content. For example, a segment in which the target object hosts a live room may be recorded, and a recording of the target object broadcasting a piece of news may also be made.
The video acquired of the target object contains multiple frame images, and multiple images containing one or more continuous actions of the target object can be selected from the frame images of the video to form an image set. By training on this image set, the actions and expressions of the simulated target object for different input content can be predicted.
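As an illustration of this frame-selection step, the following is a minimal sketch using OpenCV; the fixed sampling interval and the file name are assumptions for illustration, not details specified by the disclosure.

```python
import cv2

def sample_frames(video_path, step=5):
    """Select every `step`-th frame of a recorded broadcast video to
    build the training image set (`step` is an assumed parameter)."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames

# Usage: collect the image set from a recorded news broadcast of the anchor.
images = sample_frames("anchor_broadcast.mp4", step=5)
```

Keeping every few frames preserves the continuity of actions across the selected images while reducing redundancy between adjacent frames.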
S102: obtain, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element.
After the multiple images (for example, video frames) related to the target object are obtained, constituent objects on the target object can be selected for modeling. To improve the efficiency of modeling, a specific region to which users are not very perceptually sensitive (for example, the face region as a whole) and specific elements to which users are highly sensitive (for example, the mouth and eyes) can be chosen for modeling.
Specifically, the texture map of the specific region of the target object (for example, the face texture) and the key points of the specific elements (for example, key points of facial features such as the eyes and mouth) are obtained from the multiple images, so as to form the texture map of the target object and the shape constraint map of the specific elements.
The texture of the specific region can be obtained by 3D reconstruction. Taking the face as an example, a three-dimensional face mesh is obtained by 3D face reconstruction, and the face pixel values corresponding to all the three-dimensional mesh points constitute the face texture of the target object (for example, the anchor). The 3D face reconstruction can be implemented with existing technology.
The shape constraint map of a specific element can be produced by key point detection. Taking the eyes and mouth as an example, eye and mouth key points are obtained with an existing facial key point detection algorithm, and the key points around the eyes/mouth are connected to form closed eye/mouth regions. The pupil region of the eye is filled with blue, the remaining parts of the eye are filled with white, and the closed mouth region is filled with red. The images obtained after the closed regions formed by the key points of the specific elements are filled with color constitute the shape constraint map of the specific elements.
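As a hedged illustration of this coloring scheme, the sketch below rasterizes such a shape constraint map with OpenCV, assuming the eye contour, pupil, and mouth key points have already been produced by an existing facial key point detection algorithm (the detector itself is not shown; the point arrays are placeholders).

```python
import numpy as np
import cv2

def draw_constraint_map(height, width, eye_pts, pupil_pts, mouth_pts):
    """Connect the key points around each element into closed regions and
    fill them: white eye region, blue pupil, red closed-mouth region."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    cv2.fillPoly(canvas, [eye_pts.astype(np.int32)], (255, 255, 255))  # eye: white
    cv2.fillPoly(canvas, [pupil_pts.astype(np.int32)], (255, 0, 0))    # pupil: blue (BGR)
    cv2.fillPoly(canvas, [mouth_pts.astype(np.int32)], (0, 0, 255))    # mouth: red (BGR)
    return canvas
```

Because only filled contours remain, the constraint map encodes the shape and openness of each element while discarding appearance details, which the fixed texture map supplies instead.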
S103: construct a reconstruction model of the target object based on the texture map, the shape constraint map, and the two-dimensional image information of the multiple images.
After the texture map and the shape constraint map are obtained, the reconstruction model for the target object can be trained and constructed through a configured convolutional neural network, in combination with the multiple images from which the texture map and the shape constraint map were generated.
The convolutional neural network structure may include several convolutional layers, pooling layers, fully connected layers, and a classifier, where the output layer of the last layer of the convolutional neural network structure has the same number of nodes as the input layer, so that it can directly output video frames that render the target object's likeness.
During the training of the convolutional neural network, the prediction error is measured with a mean square error function, that is, the difference between the predicted output frame of the target object's likeness and the manually acquired frame of the target object's likeness; this difference is reduced using back propagation.
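The disclosure gives no code for this training loop; as a minimal sketch, the following PyTorch fragment measures the prediction error between the output frame and the manually acquired frame with a mean square error function and reduces it by back propagation. The layer configuration is a placeholder whose output size matches the input frame; the actual architecture (convolutional layers, pooling layers, fully connected layers, and a classifier) is not fully specified in the disclosure.

```python
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """Placeholder network whose last layer matches the input node count,
    so that it can directly emit a full video frame."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),  # stacked texture + constraint maps
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),             # predicted RGB frame
        )

    def forward(self, texture, constraint):
        return self.body(torch.cat([texture, constraint], dim=1))

model = ReconstructionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()  # mean square error between output and captured frame

def train_step(texture, constraint, captured_frame):
    optimizer.zero_grad()
    loss = criterion(model(texture, constraint), captured_frame)  # prediction error
    loss.backward()   # back propagation reduces the prediction error
    optimizer.step()
    return loss.item()
```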
S104: use the reconstruction model to generate a fidelity image matching the input information for the target object, the fidelity image containing one or more predicted actions matching the input information.
After the reconstruction model is in place, the various actions and expressions of the target object in video can be predicted with the reconstruction model in the form of video animation. Specifically, a video file containing the target object's actions and expressions can be produced by generating fidelity images; the fidelity images can serve as all the frames or as the key frames of the video file, and they contain the set of multiple images with one or more predicted actions matching the input information.
The input information can take various forms; for example, it can be text or audio. After data parsing, the input information is converted into parameters matching the texture map and the shape constraint map, and the generation of the fidelity image is finally completed by calling the texture map and the shape constraint map with the reconstruction model obtained after training.
In the prediction stage, the texture map of the target object's specific region and the shape constraints of the specific elements can be given; using the trained reconstruction model, the image information of the two-dimensional anchor likeness is predicted, and with the shape constraints of the continuous specific elements and the fixed texture of the specific region as input, continuous anchor broadcast images are predicted.
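As a hedged sketch of how parsed input might become per-frame parameters, the fragment below expands a phoneme sequence (the kind produced by the TTS conversion described in the background section) into mouth-amplitude values. The phoneme-to-amplitude table and the frames-per-phoneme count are assumptions for illustration only.

```python
# Hypothetical phoneme-to-amplitude table; a real system would derive this
# from the TTS engine's phoneme sequence and phoneme durations.
PHONEME_AMPLITUDE = {"a": 1.0, "o": 0.7, "e": 0.5, "m": 0.0, "sil": 0.0}

def parse_text_input(phonemes, frames_per_phoneme=3):
    """Expand a phoneme sequence into per-frame mouth-amplitude parameters,
    i.e. the kind of parsing result the reconstruction model consumes."""
    amplitudes = []
    for p in phonemes:
        amplitudes.extend([PHONEME_AMPLITUDE.get(p, 0.3)] * frames_per_phoneme)
    return amplitudes

mouth_track = parse_text_input(["m", "a", "o", "sil"])  # e.g. "mao" then silence
```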
In implementing step S101, referring to Fig. 2, according to a specific implementation of the embodiment of the present disclosure, acquiring the multiple images containing the target object may include the following steps:
S201: perform video acquisition of the target object with a camera device to obtain a video file containing multiple video frames.
The target object is usually a person with a dissemination function. Since such a person usually has a certain popularity, considerable cost is usually incurred when a massive amount of content requires the target object to perform broadcasts containing voice and/or video actions. Meanwhile, for live programs, the target object usually cannot appear in multiple live rooms (or on multiple live channels) at the same time; if an effect such as an "anchor's avatar" is desired, it is usually difficult to achieve through on-site reporting by a real person.
For this reason, video of the target object (for example, an anchor) needs to be acquired in advance with recording equipment such as a video camera, collecting and recording the target object's broadcasts of different content. For example, a segment in which the target object hosts a live room may be recorded, and a recording of the target object broadcasting a piece of news may also be made.
S202: select some or all of the video frames from the video file to form the multiple images containing the target object.
The video acquired of the target object contains multiple frame images, and multiple images containing one or more continuous actions of the target object can be selected from the video to form an image set. By training on this image set, the actions and expressions of the target object for different input content can be predicted and simulated.
As another implementation of step S101, referring to Fig. 3, according to a specific implementation of the embodiment of the present disclosure, acquiring the multiple images containing the target object may also include steps S301-S303:
S301: set broadcast samples of different styles for the target object.
In order to capture the target object's various actions and expressions more comprehensively, different types of broadcast samples can be preset. For example, the broadcast samples may cover different moods such as happiness, sadness, and anger, so as to obtain more comprehensive training samples.
S302: obtain sample videos of the target object for the broadcast samples of different styles.
By video-sampling the target object, sample videos of the target object for the different-style broadcast samples can be obtained.
S303: obtain the multiple images containing the target object from the sample videos.
According to actual needs, multiple images containing the target object can be selected from the video frames of the sample videos; these images may be some or all of the frames in the sample videos, or key frames may be selected from all the sample videos as the multiple images.
In implementing step S102, according to a specific implementation of the embodiment of the present disclosure, referring to Fig. 4, obtaining, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element may include:
S401: perform 3D reconstruction of the specific region on the target object to obtain a 3D partial object.
After the multiple images (for example, video frames) related to the target object are obtained, constituent objects on the target object can be selected for modeling. To improve the efficiency of modeling, a specific region to which users are not very perceptually sensitive (for example, the face region as a whole) and specific elements to which users are highly sensitive (for example, the mouth and eyes) can be chosen for modeling.
S402: obtain the three-dimensional mesh of the 3D partial object, the three-dimensional mesh containing preset coordinate values.
The 3D partial object describes its specific positions through the three-dimensional mesh, for which specific coordinate values are set; for example, the three-dimensional mesh can be described by setting planar two-dimensional coordinates and a spatial height coordinate.
S403: determine the texture map of the specific region based on the pixel values at different three-dimensional mesh coordinates.
The pixel values at different three-dimensional mesh coordinates can be linked together to form a mesh plane, and this mesh plane forms the texture map of the specific region.
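A minimal sketch of this assembly step follows, under the assumption that the reconstruction yields, for each mesh vertex, a preset planar coordinate (its position in the texture plane) and a projected position in the source image; both inputs are placeholders.

```python
import numpy as np

def build_texture_map(image, mesh_uv, mesh_xy, size=256):
    """Sample the image pixel under each three-dimensional mesh point
    (projected at mesh_xy) and place it at the point's planar coordinate
    (mesh_uv), linking the per-point pixel values into a planar texture map."""
    texture = np.zeros((size, size, 3), dtype=np.uint8)
    for (u, v), (x, y) in zip(mesh_uv, mesh_xy):
        tu = int(u * (size - 1))  # planar (preset) texture coordinate
        tv = int(v * (size - 1))
        texture[tv, tu] = image[int(y), int(x)]  # pixel value at the mesh point
    return texture  # gaps between sampled points would be interpolated in practice
```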
Through the implementation of steps S401-S403, the texture map of the specific region can be formed more quickly, improving the efficiency of texture map generation.
In executing step S104, as a specific implementation, referring to Fig. 5, using the reconstruction model to generate the fidelity image matching the input information for the target object may include the following steps:
S501: obtain the input information for the target object and parse the input information to obtain a first parsing result.
The input information can take various forms; for example, it can be text or audio. After data parsing, the input information is converted into a first parsing result, which contains parameters matching the texture map and the shape constraint map; with the reconstruction model obtained after training, the generation of the fidelity image is finally completed by calling the texture map and the shape constraint map.
S502: perform model quantization on the first parsing result to obtain an action quantization vector of the target object.
The first parsing result includes motion amplitude parameters for the specific elements on the target object. Taking the mouth as an example, the motion amplitude can be quantized as 1 when the mouth is fully open and as 0 when the mouth is fully closed; by quantizing values between 0 and 1, intermediate states of the mouth between fully open and fully closed can be described.
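A small illustration of this quantization, assuming the mouth opening is measured as a lip gap normalized by its maximum observed value (the measurement itself is an assumption):

```python
def quantize_mouth_amplitude(gap, max_gap):
    """Map a measured lip gap to [0, 1]: 0 = fully closed, 1 = fully open."""
    return max(0.0, min(1.0, gap / max_gap))

# An action quantization vector holds one amplitude per specific element,
# e.g. [mouth, left eye, right eye] (this element layout is an assumption).
action_vector = [quantize_mouth_amplitude(6.0, 20.0), 0.9, 0.85]
```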
S503: generate multiple fidelity images matching the action quantization vector.
Through the action quantization vector, the motion amplitude of the specific elements on the target object can be described in the form of a sequence of fidelity images; fidelity images of the specific motion elements at different motion amplitudes, stitched together in sequence, form a prediction result containing the target object's different actions.
Specifically, generating the multiple fidelity images matching the action quantization vector may include steps S5031-S5033:
S5031: use the texture map as the fixed input of the fidelity images.
Since users are relatively insensitive to the texture map, the texture map can serve as the fixed input for the predicted target object in the process of forming the fidelity images; that is, the texture map remains unchanged across the fidelity images.
S5032: determine the motion constraint value of the specific element based on element values in the action quantization vector.
Through the element values in the action quantization vector, the motion amplitude of a specific element on the target object in a given fidelity image can be described; fidelity images of the specific motion elements at different motion amplitudes, stitched together in sequence, form a prediction result containing the target object's different actions.
S5033: predict, from continuous motion constraint values and the fixed texture map, multiple fidelity images matching the input information.
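The sketch below ties steps S5031-S5033 together, reusing the hypothetical ReconstructionNet from the training sketch above; the linear interpolation between closed- and open-mouth key points and the rasterizing callback are assumptions for illustration.

```python
import torch

def generate_fidelity_frames(model, texture, closed_pts, open_pts, amplitudes, rasterize):
    """Fixed texture input plus a continuous sequence of motion constraint
    values yields a continuous sequence of predicted fidelity frames.
    `rasterize` turns interpolated key points into a constraint-map tensor
    (for example, draw_constraint_map above converted to a torch tensor)."""
    model.eval()
    frames = []
    for a in amplitudes:  # continuous motion constraint values, e.g. 0.0 .. 1.0
        mouth_pts = closed_pts + a * (open_pts - closed_pts)  # assumed interpolation
        constraint = rasterize(mouth_pts)
        with torch.no_grad():
            frames.append(model(texture, constraint))  # texture stays fixed
    return frames
```

Stitching the returned frames together in sequence yields the prediction result containing the target object's continuous actions.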
Corresponding to the above method embodiment, referring to Fig. 6, an embodiment of the present disclosure further discloses a fidelity image generating apparatus 60, comprising:
an acquisition module 601 for acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images.
The actions and expressions of the target object are the content that the scheme of the present disclosure simulates and predicts. As an example, the target object may be a real person who performs network broadcasts, or another object with an information dissemination function, such as a television announcer, a news presenter, or a lecturing teacher.
The target object is usually a person with a dissemination function. Since such a person usually has a certain popularity, considerable cost is usually incurred when a massive amount of content requires the target object to perform broadcasts containing voice and/or video actions. Meanwhile, for live programs, the target object usually cannot appear in multiple live rooms (or on multiple live channels) at the same time; if an effect such as an "anchor's avatar" is desired, it is usually difficult to achieve through on-site reporting by a real person.
For this reason, video of the target object (for example, an anchor) needs to be acquired in advance with recording equipment such as a video camera, collecting and recording the target object's broadcasts of different content. For example, a segment in which the target object hosts a live room may be recorded, and a recording of the target object broadcasting a piece of news may also be made.
The video acquired of the target object contains multiple frame images, and multiple images containing one or more continuous actions of the target object can be selected from the video to form an image set. By training on this image set, the actions and expressions of the target object for different input content can be predicted and simulated.
an obtaining module 602 for determining, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element.
After the multiple images (for example, video frames) related to the target object are obtained, constituent objects on the target object can be selected for modeling. To improve the efficiency of modeling, a specific region to which users are not very perceptually sensitive (for example, the face region as a whole) and specific elements to which users are highly sensitive (for example, the mouth and eyes) can be chosen for modeling.
Specifically, the texture map of the specific region of the target object (for example, the face texture) and the key points of the specific elements (for example, key points of facial features such as the eyes and mouth) are obtained from the multiple images, so as to form the texture map of the target object and the shape constraint map of the specific elements.
The texture of the specific region can be obtained by 3D reconstruction. Taking the face as an example, a three-dimensional face mesh is obtained by 3D face reconstruction, and the face pixel values corresponding to all the three-dimensional mesh points constitute the face texture of the target object (for example, the anchor). The 3D face reconstruction can be implemented with existing technology.
The shape constraint map of a specific element can be produced by key point detection. Taking the eyes and mouth as an example, eye and mouth key points are obtained with an existing facial key point detection algorithm, and the key points around the eyes/mouth are connected to form closed eye/mouth regions. The pupil region of the eye is filled with blue, the remaining parts of the eye are filled with white, and the closed mouth region is filled with red. The images obtained after the closed regions formed by the key points of the specific elements are filled with color constitute the shape constraint map of the specific elements.
a construction module 603 for constructing the reconstruction model of the target object based on the texture map, the shape constraint map, and the two-dimensional image information of the multiple images.
After the texture map and the shape constraint map are obtained, the reconstruction model for the target object can be trained and constructed through a configured convolutional neural network, in combination with the multiple images from which the texture map and the shape constraint map were generated.
Specifically, the convolutional neural network structure may include several convolutional layers, pooling layers, fully connected layers, and a classifier, where the output layer of the last layer of the convolutional neural network structure has the same number of nodes as the input layer, so that it can directly output video frames that render the target object's likeness.
During the training of the convolutional neural network, the prediction error is measured with a mean square error function, that is, the difference between the predicted output frame of the target object's likeness and the manually acquired frame of the target object's likeness; this difference is reduced using back propagation.
a generation module 604 for using the reconstruction model to generate a fidelity image matching input information for the target object, the fidelity image containing one or more predicted actions matching the input information.
After the reconstruction model is in place, the various actions and expressions of the target object in video can be predicted with the reconstruction model in the form of video animation. Specifically, a video file containing the target object's actions and expressions can be produced by generating fidelity images; the fidelity images can serve as all the frames or as the key frames of the video file, and they contain the set of multiple images with one or more predicted actions matching the input information.
The input information can take various forms; for example, it can be text or audio. After data parsing, the input information is converted into parameters matching the texture map and the shape constraint map, and the generation of the fidelity image is finally completed by calling the texture map and the shape constraint map with the reconstruction model obtained after training.
In the prediction stage, the texture map of the target object's specific region and the shape constraints of the specific elements can be given; using the trained reconstruction model, the image information of the two-dimensional anchor likeness is predicted, and with the shape constraints of the continuous specific elements and the fixed texture of the specific region as input, continuous anchor broadcast images are predicted.
The apparatus shown in Fig. 6 can correspondingly execute the content of the above method embodiment; for the parts not described in detail in this embodiment, refer to the content recorded in the above method embodiment, which is not repeated here.
Referring to Fig. 7, an embodiment of the present disclosure further provides an electronic device 70, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the fidelity image generation method in the foregoing method embodiment.
An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the fidelity image generation method in the foregoing method embodiment.
An embodiment of the present disclosure further provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program containing program instructions which, when executed by a computer, cause the computer to perform the fidelity image generation method in the foregoing method embodiment.
Referring now to Fig. 7, it shows a schematic structural diagram of an electronic device 70 suitable for implementing embodiments of the present disclosure. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and vehicle-mounted terminals (such as vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 7 is only an example and should not impose any restriction on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 70 may include a processing unit (such as a central processing unit or a graphics processor) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data needed for the operation of the electronic device 70. The processing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices can be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, and a gyroscope; output devices 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 708 including, for example, a magnetic tape and a hard disk; and a communication device 709. The communication device 709 allows the electronic device 70 to communicate wirelessly or by wire with other devices to exchange data. Although the figure shows the electronic device 70 with various devices, it should be understood that it is not required to implement or provide all the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such embodiments, the computer program can be downloaded and installed from a network through the communication device 709, or installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing unit 701, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure can be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium can be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted with any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: obtains at least two internet protocol addresses; sends to a node evaluation device a node evaluation request containing the at least two internet protocol addresses, wherein the node evaluation device selects an internet protocol address from the at least two internet protocol addresses and returns it; and receives the internet protocol address returned by the node evaluation device; wherein the obtained internet protocol address indicates an edge node in a content distribution network.
Alternatively, the above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: receives a node evaluation request containing at least two internet protocol addresses; selects an internet protocol address from the at least two internet protocol addresses; and returns the selected internet protocol address; wherein the received internet protocol address indicates an edge node in a content distribution network.
The computer program code for executing the operations of the present disclosure can be written in one or more programming languages or a combination thereof. The above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the internet using an internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architecture, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram can represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks can occur in a different order from that marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure can be implemented in software or in hardware, where the name of a unit does not, under certain circumstances, constitute a restriction on the unit itself; for example, the first obtaining unit can also be described as "a unit that obtains at least two internet protocol addresses".
It should be understood that each part of the present disclosure can be realized with hardware, software, firmware, or a combination thereof.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution that can easily be conceived by those familiar with the technical field within the technical scope disclosed by the present disclosure shall be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A fidelity image generation method, characterized by comprising:
acquiring multiple images containing a target object, wherein one or more continuous actions of the target object can be determined based on the multiple images;
obtaining, from the multiple images, a texture map of a specific region on the target object and a shape constraint map of a specific element;
constructing a reconstruction model of the target object based on the texture map, the shape constraint map, and two-dimensional image information of the multiple images;
using the reconstruction model to generate a fidelity image matching input information for the target object, the fidelity image containing one or more predicted actions matching the input information.
2. The method according to claim 1, wherein acquiring the multiple images containing the target object comprises:
performing video acquisition of the target object with a camera device to obtain a video file containing multiple video frames;
selecting some or all of the video frames from the video file to form the multiple images containing the target object.
3. The method according to claim 1, wherein acquiring the multiple images containing the target object comprises:
setting broadcast samples of different styles for the target object;
obtaining sample videos of the target object for the broadcast samples of different styles;
obtaining the multiple images containing the target object from the sample videos.
4. The method according to claim 1, wherein obtaining, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element comprises:
performing 3D reconstruction of the specific region on the target object to obtain a 3D partial object;
obtaining a three-dimensional mesh of the 3D partial object, the three-dimensional mesh containing preset coordinate values;
determining the texture map of the specific region based on pixel values at different three-dimensional mesh coordinates.
5. The method according to claim 4, wherein obtaining, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element further comprises:
performing key point detection for the specific element in the multiple images to obtain multiple key points related to the specific element;
forming, based on the multiple key points, the shape constraint map describing the specific element.
6. The method according to claim 1, wherein constructing the reconstruction model of the target object based on the texture map, the shape constraint map, and the two-dimensional image information of the multiple images comprises:
setting up a convolutional neural network for training the reconstruction model, and training images containing the target object with the convolutional neural network, wherein the number of nodes in the last layer of the convolutional neural network is consistent with that of the input layer.
7. The method according to claim 6, wherein training the images containing the target object with the convolutional neural network comprises:
measuring a prediction error with a mean square error function, the prediction error describing the difference between an output likeness frame and a manually acquired frame;
reducing the prediction error using back propagation.
8. The method according to claim 1, wherein using the reconstruction model to generate the fidelity image matching the input information for the target object comprises:
obtaining the input information for the target object and parsing the input information to obtain a first parsing result;
performing model quantization on the first parsing result to obtain an action quantization vector of the target object;
generating multiple fidelity images matching the action quantization vector.
9. The method according to claim 8, wherein generating the multiple fidelity images matching the action quantization vector comprises:
using the texture map as a fixed input of the fidelity images;
determining a motion constraint value of the specific element based on element values in the action quantization vector;
predicting, from continuous motion constraint values and the fixed texture map, multiple fidelity images matching the input information.
10. A fidelity image generating apparatus, comprising:
An acquisition module, configured to acquire the multiple images comprising the target object, wherein one or more continuous actions of the target object can be determined based on the multiple images;
An obtaining module, configured to determine, from the multiple images, the texture map of the specific region on the target object and the shape constraint map of the specific element;
A construction module, configured to construct the reconstruction model of the target object based on the texture map, the shape constraint map and the two-dimensional image information of the multiple images;
A generation module, configured to generate, by using the reconstruction model, a fidelity image matching the input information of the target object, the fidelity image containing one or more predicted actions matching the input information.
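To make the division of labor concrete (the module internals are stand-ins; only the four-module split follows the claim), the modules could be chained as in this sketch:

    class FidelityImageGenerator:
        """Apparatus sketch: four cooperating modules, each passed in as a
        callable implementing the corresponding method step."""
        def __init__(self, acquisition, obtaining, construction, generation):
            self.acquisition = acquisition    # collects images of the target object
            self.obtaining = obtaining        # texture map + shape constraint map
            self.construction = construction  # builds the reconstruction model
            self.generation = generation      # predicts the fidelity images

        def run(self, video_sources, input_information):
            images = self.acquisition(video_sources)
            texture, constraints = self.obtaining(images)
            model = self.construction(texture, constraints, images)
            return self.generation(model, input_information)
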
11. An electronic device, comprising:
At least one processor; and
A memory communicatively connected to the at least one processor; wherein
The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the fidelity image generation method according to any one of claims 1-9.
12. A non-transitory computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to perform the fidelity image generation method according to any one of claims 1-9.
CN201910216551.4A 2019-03-21 2019-03-21 Fidelity image generation method and device and electronic equipment Active CN110035271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910216551.4A CN110035271B (en) 2019-03-21 2019-03-21 Fidelity image generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910216551.4A CN110035271B (en) 2019-03-21 2019-03-21 Fidelity image generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110035271A true CN110035271A (en) 2019-07-19
CN110035271B CN110035271B (en) 2020-06-02

Family

ID=67236469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910216551.4A Active CN110035271B (en) 2019-03-21 2019-03-21 Fidelity image generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110035271B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130271451A1 (en) * 2011-08-09 2013-10-17 Xiaofeng Tong Parameterized 3d face generation
US20140210830A1 (en) * 2013-01-29 2014-07-31 Kabushiki Kaisha Toshiba Computer generated head
CN106651978A (en) * 2016-10-10 2017-05-10 讯飞智元信息科技有限公司 Face image prediction method and system
CN108229239A (en) * 2016-12-09 2018-06-29 武汉斗鱼网络科技有限公司 A kind of method and device of image procossing
CN106652025A (en) * 2016-12-20 2017-05-10 五邑大学 Three-dimensional face modeling method and three-dimensional face modeling printing device based on video streaming and face multi-attribute matching
CN107463888A (en) * 2017-07-21 2017-12-12 竹间智能科技(上海)有限公司 Face mood analysis method and system based on multi-task learning and deep learning
CN107977511A (en) * 2017-11-30 2018-05-01 浙江传媒学院 A kind of industrial design material high-fidelity real-time emulation algorithm based on deep learning
CN108280883A (en) * 2018-02-07 2018-07-13 北京市商汤科技开发有限公司 It deforms the generation of special efficacy program file packet and deforms special efficacy generation method and device
CN108961369A (en) * 2018-07-11 2018-12-07 厦门幻世网络科技有限公司 The method and apparatus for generating 3D animation
CN109344693A (en) * 2018-08-13 2019-02-15 华南理工大学 A kind of face multizone fusion expression recognition method based on deep learning
CN109255830A (en) * 2018-08-31 2019-01-22 百度在线网络技术(北京)有限公司 Three-dimensional facial reconstruction method and device
CN109325437A (en) * 2018-09-17 2019-02-12 北京旷视科技有限公司 Image processing method, device and system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532891A (en) * 2019-08-05 2019-12-03 北京地平线机器人技术研发有限公司 Target object state identification method, device, medium and equipment
CN110532891B (en) * 2019-08-05 2022-04-05 北京地平线机器人技术研发有限公司 Target object state identification method, device, medium and equipment
CN111294665A (en) * 2020-02-12 2020-06-16 百度在线网络技术(北京)有限公司 Video generation method and device, electronic equipment and readable storage medium
CN111368137A (en) * 2020-02-12 2020-07-03 百度在线网络技术(北京)有限公司 Video generation method and device, electronic equipment and readable storage medium
CN114125492A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Live content generation method and device
CN114125492B (en) * 2022-01-24 2022-07-15 阿里巴巴(中国)有限公司 Live content generation method and device

Also Published As

Publication number Publication date
CN110035271B (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN110035271A (en) Fidelity image generation method, device and electronic equipment
CN110047121A (en) Animation producing method, device and electronic equipment end to end
CN110227266B (en) Building virtual reality game play environments using real world virtual reality maps
CN110047119A (en) Animation producing method, device and electronic equipment comprising dynamic background
CN110189394A (en) Shape of the mouth as one speaks generation method, device and electronic equipment
GB2491819A (en) Server for remote viewing and interaction with a virtual 3-D scene
CN110189246A (en) Image stylization generation method, device and electronic equipment
CN110222726A (en) Image processing method, device and electronic equipment
CN110062176A (en) Generate method, apparatus, electronic equipment and the computer readable storage medium of video
CN110267097A (en) Video pushing method, device and electronic equipment based on characteristic of division
Pérez et al. Emerging immersive communication systems: overview, taxonomy, and good practices for QoE assessment
CN107343206A (en) Support video generation method, device, medium and the electronic equipment of various visual angles viewing
CN110287891A (en) Gestural control method, device and electronic equipment based on human body key point
CN109982130A (en) A kind of video capture method, apparatus, electronic equipment and storage medium
CN110177295A (en) Processing method, device and the electronic equipment that subtitle crosses the border
CN110225400A (en) A kind of motion capture method, device, mobile terminal and storage medium
CN110288037A (en) Image processing method, device and electronic equipment
Dupont et al. Exploring the appropriateness of different immersive environments in the context of an innovation process for smart-cities
Kaushik et al. A comprehensive analysis of mixed reality visual displays in context of its applicability in IoT
CN110288532B (en) Method, apparatus, device and computer readable storage medium for generating whole body image
CN110060324A (en) Image rendering method, device and electronic equipment
CN110069997A (en) Scene classification method, device and electronic equipment
Soliman et al. Artificial intelligence powered Metaverse: analysis, challenges and future perspectives
Aljumaie et al. Potential Applications and Benefits of Metaverse
CN110197459A (en) Image stylization generation method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.