CN109120980B - Special effect adding method for recommendation video and related product - Google Patents

Special effect adding method for recommendation video and related product

Info

Publication number
CN109120980B
CN109120980B
Authority
CN
China
Prior art keywords
video
sub
special effect
added
recommendation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810982687.1A
Other languages
Chinese (zh)
Other versions
CN109120980A (en)
Inventor
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qingmu Culture Communication Co.,Ltd.
Original Assignee
Shenzhen Qingmu Culture Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qingmu Culture Communication Co ltd filed Critical Shenzhen Qingmu Culture Communication Co ltd
Priority to CN201810982687.1A priority Critical patent/CN109120980B/en
Publication of CN109120980A publication Critical patent/CN109120980A/en
Application granted granted Critical
Publication of CN109120980B publication Critical patent/CN109120980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a special effect adding method for a recommendation video and a related product, wherein the method comprises the following steps: acquiring a recommendation video and a special effect to be added; determining the duration of the special effect to be added, and extracting a sub-video matched with the duration from the recommendation video; and adding the special effect to the sub-video to obtain an added sub-video, and putting the added sub-video back into the recommendation video. The technical scheme provided by the application has the advantage of improving the quality of the recommendation video.

Description

Special effect adding method for recommendation video and related product
Technical Field
The invention relates to the technical field of culture media, in particular to a special effect adding method of a recommendation video and a related product.
Background
Enterprises are the main actors in commercial activity, and many of them need promotion, so recommendation videos have emerged. A recommendation video, also called an enterprise promotional video, is generally produced by a professional film company.
Existing recommendation videos do not support the addition of special effects, which affects their quality.
Disclosure of Invention
The embodiment of the invention provides a special effect adding method for a recommendation video and a related product, which make it possible to add special effects and thus have the advantage of improving the quality of the recommendation video.
In a first aspect, an embodiment of the present invention provides a method for adding a special effect to a recommendation video, where the method includes the following steps:
acquiring a recommendation video and a special effect to be added;
determining the duration of a special effect to be added, and extracting a sub-video matched with the duration from a recommendation video;
and adding the special effect to the sub-video to obtain an added sub-video, and putting the added sub-video back into the recommendation video.
Optionally, the extracting the sub-video matched with the duration from the recommendation video specifically includes:
determining a person region and a background region from the recommendation video, determining a continuous sub-video in which the background region is larger than the area of the special effect, extracting from that continuous sub-video a sub-video whose length is greater than the duration of the special effect, and determining that sub-video to be the sub-video matched with the duration.
Optionally, the adding the special effect to the sub-video to obtain the added sub-video specifically includes:
determining the area of the special effect, making the region of the background of the sub-video that matches that area transparent, and then superimposing the special effect on the partially transparent sub-video to obtain the added sub-video.
Optionally, the method further includes:
and saving the recommendation video added with the special effect.
In a second aspect, a terminal is provided, which includes: a processor, a communication unit and a display screen,
the communication unit is used for acquiring a recommendation video and a special effect to be added;
the processor is used for determining the duration of the special effect to be added and extracting the sub-video matched with the duration from the recommendation video; and adding the special effect to the sub-video to obtain an added sub-video, and putting the added sub-video back into the recommendation video.
Optionally, the processor is specifically configured to determine a person region and a background region from the recommendation video, determine a continuous sub-video in which the background region is larger than the area of the special effect, extract from that continuous sub-video a sub-video whose length is greater than the duration of the special effect, and determine that sub-video to be the sub-video matched with the duration.
Optionally, the processor is specifically configured to determine the area of the special effect, make the region of the background of the sub-video that matches that area transparent, and then superimpose the special effect on the partially transparent sub-video to obtain the added sub-video.
Optionally, the processor is further configured to save the recommendation video to which the special effect is added.
Optionally, the terminal is: a tablet computer or a personal computer.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
according to the technical scheme, after the recommendation video and the added special effect are determined, the adding time of the special effect is determined, then the sub-video matched with the adding time is selected, and the special effect is added into the sub-video, so that the special effect of the recommendation video is added, and the video quality is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of a terminal.
Fig. 2 is a flowchart illustrating a special effect adding method for a recommendation video.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal. As shown in fig. 1, the terminal may be an intelligent terminal, specifically a tablet computer such as an Android tablet, an iOS tablet, a Windows Phone tablet, and the like. The terminal may also be a personal computer, a server, or the like. The terminal comprises: a processor 101, a display screen 105, a communication module 102, a memory 103 and an image processor 104.
The processor 101 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring or controlling the terminal as a whole. Alternatively, processor 101 may include one or more processing units; optionally, the processor 101 may integrate an application processor, a modem processor, and an artificial intelligence chip, wherein the application processor mainly processes an operating system, a user interface, an application program, and the like.
Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The communication module can be used to receive and send information. Typically, the communication module includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the communication module can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, such as a mobile communication protocol or a short-range communication protocol (including but not limited to Bluetooth, Wi-Fi, etc.).
The image processor 104 may be specifically configured to perform relevant processing on an image (e.g., a video), and in practical applications, the image processor 104 may be integrated into the processor 101.
The display screen may be used to display advertisements, and may specifically be an LCD display screen, but may also be other forms of display screens, such as a touch display screen.
Referring to fig. 2, fig. 2 provides a special effect adding method for a recommendation video, which is executed by the terminal shown in fig. 1 and includes the following steps:
step S201, acquiring a recommendation video and a special effect to be added;
step S202, determining the duration of a special effect to be added, and extracting a sub-video matched with the duration from a recommendation video;
step S203, adding the special effect to the sub-video to obtain an added sub-video, and putting the added sub-video back into the recommendation video.
According to the technical scheme, after the recommendation video and the special effect to be added are determined, the duration of the special effect is determined, a sub-video matched with that duration is selected, and the special effect is added to the sub-video, so that a special effect is added to the recommendation video.
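As a rough illustration of steps S201 to S203, the Python sketch below splices an effect into a recommendation video; the helper names find_matching_subvideo and overlay_effect are hypothetical, introduced here for illustration only (possible shapes for them are sketched later in this description), and are not defined by this patent.

    # Minimal sketch of steps S201-S203, assuming the recommendation video and
    # the effect have already been decoded into lists of frames.
    def add_effect_to_recommendation_video(promo_frames, effect_frames, fps,
                                           effect_area, person_mask):
        # S201: the recommendation video and the effect to be added are given.
        # S202: the effect duration determines which sub-video to extract.
        effect_duration = len(effect_frames) / fps
        span = find_matching_subvideo(promo_frames, effect_area,
                                      effect_duration, fps, person_mask)
        if span is None:
            return promo_frames                    # no suitable sub-video found
        start, end = span
        # S203: overlay the effect on the matched sub-video and put the added
        # sub-video back into the recommendation video.
        sub_with_effect = overlay_effect(promo_frames[start:end], effect_frames)
        return promo_frames[:start] + sub_with_effect + promo_frames[end:]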
The adding the special effect to the sub-video to obtain the added sub-video may specifically include:
determining the area of the special effect, making the region of the background of the sub-video that matches that area transparent, and then superimposing the special effect on the partially transparent sub-video to obtain the added sub-video.
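One way to read this superposition is as compositing the effect over a cleared patch of the background. The NumPy sketch below assumes the effect frames carry an alpha channel (RGBA) and that a placement position inside the background region has already been chosen; both are assumptions, since the patent does not fix the effect format.

    import numpy as np

    def overlay_effect_frame(sub_frame, effect_rgba, top_left):
        # Composite one RGBA effect frame onto one sub-video frame. Where the
        # effect is opaque, the matching background patch is replaced (the region
        # "made transparent" in the wording above); elsewhere the original
        # background shows through.
        h, w = effect_rgba.shape[:2]
        y, x = top_left
        patch = sub_frame[y:y + h, x:x + w].astype(np.float32)
        rgb = effect_rgba[..., :3].astype(np.float32)
        alpha = effect_rgba[..., 3:4].astype(np.float32) / 255.0
        blended = alpha * rgb + (1.0 - alpha) * patch
        sub_frame[y:y + h, x:x + w] = blended.astype(np.uint8)
        return sub_frame

    def overlay_effect(sub_frames, effect_frames, top_left=(0, 0)):
        # Apply the effect frame by frame over the extracted sub-video.
        return [overlay_effect_frame(f.copy(), e, top_left)
                for f, e in zip(sub_frames, effect_frames)]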
The extracting of the sub-video matched with the duration from the recommendation video may specifically include:
determining a person region and a background region from the recommendation video, determining a continuous sub-video in which the background region is larger than the area of the special effect, extracting from that continuous sub-video a sub-video whose length is greater than the duration of the special effect, and determining that sub-video to be the sub-video matched with the duration.
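A sketch of this selection step is given below. It assumes a person_mask helper (a hypothetical person/background segmentation routine; the region trimming described later could supply one) and scans for a continuous run of frames whose background area exceeds the area of the effect for at least the duration of the effect.

    def find_matching_subvideo(frames, effect_area, effect_duration, fps,
                               person_mask):
        # Return (start, end) frame indices of a continuous sub-video whose
        # background area exceeds effect_area for at least effect_duration
        # seconds, or None if no such run exists.
        min_frames = max(1, int(effect_duration * fps))
        run_start = None
        for i, frame in enumerate(frames):
            mask = person_mask(frame)              # boolean array, True on the person
            background_area = int((~mask).sum())   # pixels outside the person region
            if background_area > effect_area:
                if run_start is None:
                    run_start = i
                if i - run_start + 1 >= min_frames:
                    return run_start, i + 1
            else:
                run_start = None
        return None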
Optionally, the method may further include:
and saving the recommendation video added with the special effect.
The method for determining the person region may specifically include:
running a face recognition algorithm on a video frame to determine the face range; setting, with the face range as a reference, one chest region (rectangle), a left-hand region (rectangle), a right-hand region (rectangle) and two leg regions (rectangles); extracting the RGB value of each pixel in the chest region, counting how many pixels share each RGB value, and determining the first RGB value, i.e. the value with the largest count; connecting the adjacent pixels having the first RGB value to obtain a first pixel frame; if the first pixel frame is closed, determining the area inside it to be the trimmed chest region; if the first pixel frame is not continuous, determining the distance between its broken line segments, and, if that distance is less than a set threshold and the broken segments have the same RGB value, connecting them with a straight line to obtain a closed second pixel frame, the area inside which is the trimmed chest region.
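The sketch below illustrates this idea for the chest rectangle, using OpenCV's Haar cascade as one possible face recognition algorithm; the rectangle proportions relative to the face and the omission of the gap-bridging ("broken line segments") step are simplifications for illustration, not part of the patent text.

    import cv2
    import numpy as np

    def trim_chest_region(frame_bgr):
        # Detect a face, place a chest rectangle below it, and keep the pixels of
        # the most frequent RGB value inside that rectangle as an approximation
        # of the trimmed chest region.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(
            cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        chest = frame_bgr[y + h:y + 3 * h, x:x + w]   # assumed proportions
        pixels = chest.reshape(-1, 3)
        values, counts = np.unique(pixels, axis=0, return_counts=True)
        first_rgb = values[counts.argmax()]           # most frequent RGB value
        mask = np.all(chest == first_rgb, axis=-1)    # pixels of the first RGB value
        return (x, y + h, w, 2 * h), mask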
The limb regions are divided into the left-hand region, the right-hand region and the two-leg region. For the two-leg region, the trimmed two-leg region can be obtained with the chest-region trimming method described above;
the left-hand region trimming method specifically includes the following steps:
extracting the RGB value of each pixel in the left-hand region, counting how many pixels share each RGB value, and determining the first RGB value (the value with the largest count) and the second RGB value (the value with the second largest count); connecting the adjacent pixels having the first RGB value to obtain a first pixel frame, and connecting the adjacent pixels having the second RGB value to obtain a second pixel frame; if the first pixel frame and the second pixel frame are both closed and connected to each other, determining them to be the trimmed left-hand region. The trimmed right-hand region is obtained in the same way, and the face region, the three limb regions and the one chest region are combined to obtain the person range.
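For the hand rectangles, a small variation of the same idea keeps the two most frequent RGB values; the sketch below simplifies the closure-and-connectivity test on the two pixel frames to a combined mask and is only an illustration.

    import numpy as np

    def trim_hand_region(hand_bgr):
        # Keep the pixels of the first and second most frequent RGB values inside
        # a hand rectangle (typically skin and sleeve colours) as an approximation
        # of the trimmed hand region.
        pixels = hand_bgr.reshape(-1, 3)
        values, counts = np.unique(pixels, axis=0, return_counts=True)
        if len(values) < 2:
            return np.all(hand_bgr == values[0], axis=-1)
        order = counts.argsort()[::-1]
        first_rgb, second_rgb = values[order[0]], values[order[1]]
        return (np.all(hand_bgr == first_rgb, axis=-1) |
                np.all(hand_bgr == second_rgb, axis=-1))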
The determination of the person range is only an approximate one, because only the approximate range of the person is needed when the recommendation video is edited; a fine-grained determination is unnecessary, since the scene and the person in the source footage remain the same, so the determined range can be used directly.
Referring to fig. 3, fig. 3 provides a terminal including: a processor 301, a communication unit 302 and a display 303,
the communication unit is used for acquiring a recommendation video and a special effect to be added;
the processor is used for determining the duration of the special effect to be added and extracting the sub-video matched with the duration from the recommendation video; and adding the special effect to the sub-video to obtain an added sub-video, and putting the added sub-video back into the recommendation video.
An embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the special effect adding methods for recommending videos as described in the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps of any one of the special effects adding methods of recommendation video described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A method for adding special effects to a recommended video, the method comprising the steps of:
acquiring a recommendation video and a special effect to be added;
determining the duration of a special effect to be added, and extracting a sub-video matched with the duration from a recommendation video;
adding the special effect to the sub-video to obtain an added sub-video, and putting the added sub-video back into the recommendation video; the extracting of the sub-video matched with the duration from the recommendation video specifically includes: determining a person region and a background region from the recommendation video, determining a continuous sub-video in which the background region is larger than the area of the special effect, extracting from that continuous sub-video a sub-video whose length is greater than the duration of the special effect, and determining that sub-video to be the sub-video matched with the duration.
2. The method of claim 1, wherein the adding the special effect to the sub-video to obtain the added sub-video specifically comprises:
determining the area of the special effect, making the region of the background of the sub-video that matches that area transparent, and then superimposing the special effect on the partially transparent sub-video to obtain the added sub-video.
3. The method of claim 1, further comprising:
and saving the recommendation video added with the special effect.
4. A terminal, the terminal comprising: a processor, a communication unit and a display screen, characterized in that,
the communication unit is used for acquiring a recommendation video and a special effect to be added;
the processor is used for determining the duration of the special effect to be added and extracting the sub-video matched with the duration from the recommendation video; adding the special effect to the sub-video to obtain an added sub-video, and putting the added sub-video back into the recommendation video;
the processor is specifically configured to determine a person region and a background region from the recommendation video, determine a continuous sub-video in which the background region is larger than the area of the special effect, extract from that continuous sub-video a sub-video whose length is greater than the duration of the special effect, and determine that sub-video to be the sub-video matched with the duration.
5. The terminal of claim 4,
the processor is specifically configured to determine the area of the special effect, make the region of the background of the sub-video that matches that area transparent, and then superimpose the special effect on the partially transparent sub-video to obtain the added sub-video.
6. The terminal of claim 4,
the processor is further configured to save the recommendation video with the special effect added.
7. A terminal according to any of claims 4-6,
the terminal is as follows: a tablet computer or a personal computer.
8. A computer-readable storage medium storing a program, wherein the program causes a terminal to perform the method provided in any one of claims 1 to 3.
CN201810982687.1A 2018-08-27 2018-08-27 Special effect adding method for recommendation video and related product Active CN109120980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810982687.1A CN109120980B (en) 2018-08-27 2018-08-27 Special effect adding method for recommendation video and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810982687.1A CN109120980B (en) 2018-08-27 2018-08-27 Special effect adding method for recommendation video and related product

Publications (2)

Publication Number Publication Date
CN109120980A CN109120980A (en) 2019-01-01
CN109120980B true CN109120980B (en) 2021-04-06

Family

ID=64861052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810982687.1A Active CN109120980B (en) 2018-08-27 2018-08-27 Special effect adding method for recommendation video and related product

Country Status (1)

Country Link
CN (1) CN109120980B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024480A (en) * 2012-12-28 2013-04-03 杭州泰一指尚科技有限公司 Method for implanting advertisement in video
CN104581222A (en) * 2015-01-05 2015-04-29 李伟贤 Method for quickly recognizing advertisement position and implanting advertisement in video
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN108073669A (en) * 2017-01-12 2018-05-25 北京市商汤科技开发有限公司 Business object methods of exhibiting, device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9712570B1 (en) * 2016-09-28 2017-07-18 Atlassian Pty Ltd Dynamic adaptation to increased SFU load by disabling video streams

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024480A (en) * 2012-12-28 2013-04-03 杭州泰一指尚科技有限公司 Method for implanting advertisement in video
CN104581222A (en) * 2015-01-05 2015-04-29 李伟贤 Method for quickly recognizing advertisement position and implanting advertisement in video
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN108073669A (en) * 2017-01-12 2018-05-25 北京市商汤科技开发有限公司 Business object methods of exhibiting, device and electronic equipment

Also Published As

Publication number Publication date
CN109120980A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
KR102329862B1 (en) Method and electronic device for converting color of image
US9952858B2 (en) Computer readable storage media and methods for invoking an action directly from a scanned code
US10181203B2 (en) Method for processing image data and apparatus for the same
CN107330858B (en) Picture processing method and device, electronic equipment and storage medium
CN106327224A (en) Product tracing information collecting and entering system and method based on intelligent mobile terminal
CN104102409A (en) Scenario adaptation device and method for user interface
CN105095721A (en) Fingerprint authentication display device and method
CN104796487A (en) Social interaction method and related equipment
CN109241437A (en) A kind of generation method, advertisement recognition method and the system of advertisement identification model
CN111432274A (en) Video processing method and device
CN109167939B (en) Automatic text collocation method and device and computer storage medium
CN109120980B (en) Special effect adding method for recommendation video and related product
US9898799B2 (en) Method for image processing and electronic device supporting thereof
CN108924518B (en) Method for synthesizing in recommendation video and related products
CN109087249B (en) Blurring method of recommendation video and related products
CN108932704B (en) Picture processing method, picture processing device and terminal equipment
CN109151339B (en) Method for synthesizing characters in recommendation video and related products
CN113946456A (en) Information sharing method and information sharing device
CN109361956B (en) Time-based video cropping methods and related products
CN108898081B (en) Picture processing method and device, mobile terminal and computer readable storage medium
CN107633180B (en) Data query method and system of public security system
CN109658327B (en) Self-photographing video hair style generation method and related product
CN109547850B (en) Video shooting error correction method and related product
CN108989839A (en) The leading role's selection method and Related product of promotion video
CN109712066A (en) From animal head ear adding method and the Related product of shooting the video

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210319

Address after: 201 Jujian industrial building, 1141 Nanshan Avenue, Beitou community, Nanshan street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Qingmu Culture Communication Co.,Ltd.

Address before: 518003 4K, building B, jinshanghua, No.45, Jinlian Road, Huangbei street, Luohu District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN YIDA CULTURE MEDIA Co.,Ltd.

GR01 Patent grant
GR01 Patent grant