CN114007084A - Video clip cloud storage method and device - Google Patents
Video clip cloud storage method and device
- Publication number
- CN114007084A CN114007084A CN202210001240.8A CN202210001240A CN114007084A CN 114007084 A CN114007084 A CN 114007084A CN 202210001240 A CN202210001240 A CN 202210001240A CN 114007084 A CN114007084 A CN 114007084A
- Authority
- CN
- China
- Prior art keywords
- video
- real
- cloud storage
- preprocessed
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2181—Source of audio or video content, e.g. local disk arrays comprising remotely distributed storage units, e.g. when movies are replicated over a plurality of video servers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4398—Processing of audio elementary streams involving reformatting operations of audio signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440245—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Television Signal Processing For Recording (AREA)
- Image Analysis (AREA)
Abstract
The application provides a video clip cloud storage method, which comprises the following steps: acquiring a real-time video stream in a local area network; splitting the real-time video stream according to a preset duration threshold to form a preprocessed video, and performing video segmentation, segment sorting, segment discarding, special-effect addition and music addition on the preprocessed video according to a preset clipping rule to generate a highlight video; and sending the highlight video to a cloud server for storage. Because the video is clipped and trimmed before being uploaded to the cloud server, unnecessary video segments are removed, which reduces the network load and avoids the impact of network fluctuation on real-time video transmission. The application also provides a video clip cloud storage device.
Description
Technical Field
The application relates to cloud storage technology, and in particular to a video clip cloud storage method. The application also relates to a video clip cloud storage device.
Background
With the development of internet technology, the large capacity and data security of cloud storage have made it an increasingly common data storage mode.
In the prior art, the method for moving real-time video to the cloud is to transmit the real-time video of all camera devices to the cloud over the network and to process the video in the cloud.
Disclosure of Invention
In order to solve the prior-art problem that uploading real-time video to the cloud is unstable under network fluctuation, the application provides a video clip cloud storage method and a video clip cloud storage device.
The application provides a video clip cloud storage method, which comprises the following steps:
acquiring a real-time video stream in a local area network;
splitting the real-time video stream according to a preset duration threshold to form a preprocessed video, and performing video segmentation, segment sorting, segment discarding, special-effect addition and music addition on the preprocessed video according to a preset clipping rule to generate a highlight video;
and sending the highlight video to a cloud server for storage.
Optionally, the preset clipping rule includes:
scoring the face image in each frame of the preprocessed video;
splitting the preprocessed video into video segments according to the scores, and discarding the video segments whose scores are lower than a preset score threshold;
and sorting the video segments according to the scores.
Optionally, the scoring comprises:
and scoring according to the body motion amplitude and the facial expression motion amplitude of the face image.
Optionally, the body motion amplitude and the expression motion amplitude are obtained by comparing the face image with a standard face image.
Optionally, the acquiring the real-time video stream in the local area network includes: and taking over the camera equipment in the local area network, and acquiring the real-time video stream through the camera equipment.
The present application further provides a video clip cloud storage device, including:
the acquisition module is used for acquiring real-time video stream in the local area network;
the clipping module is used for splitting the real-time video stream according to a preset duration threshold to form a preprocessed video, and for performing video segmentation, segment sorting, segment discarding, special-effect addition and music addition on the preprocessed video according to a preset clipping rule to generate a highlight video;
and the sending module is used for sending the highlight video to a cloud server for storage.
Optionally, the clipping module further comprises:
the scoring unit is used for scoring the face image in each frame of the preprocessed video;
the processing unit is used for splitting the preprocessed video into video segments according to the scores and discarding the video segments whose scores are lower than a preset score threshold;
and the sorting unit is used for sorting the video segments according to the scores.
Optionally, the scoring comprises:
and scoring according to the body motion amplitude and the facial expression motion amplitude of the face image.
Optionally, the body motion amplitude and the expression motion amplitude are obtained by comparing the face image with a standard face image.
Optionally, the obtaining module further includes:
and the taking-over unit is used for taking over the camera equipment in the local area network and acquiring the real-time video stream through the camera equipment.
The application has the advantages over the prior art that:
the application provides a video clip cloud storage method, which comprises the following steps: acquiring a real-time video stream in a local area network; the real-time video stream is segmented according to a preset time length threshold value to form a preprocessed video, and the preprocessed video is subjected to video segmentation, segment sequencing, segment discarding, special effect adding and music adding according to a preset segmentation rule to generate a brocade video; and sending the brocade video to a cloud server for storage. The video is intercepted and segmented before the real-time video is uploaded to the cloud server, unnecessary video segments are removed, and the influence of network fluctuation on real-time video transmission is not needed to be worried while the network load is reduced.
Drawings
Fig. 1 is a flow chart of a video clip cloud storage in the present application.
FIG. 2 is a flow diagram of the pre-processing video clip rules in the present application.
Fig. 3 is a schematic view of a video clip cloud storage device in the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The application provides a video clip cloud storage method, which comprises the following steps: acquiring a real-time video stream in a local area network; splitting the real-time video stream according to a preset duration threshold to form a preprocessed video, and performing video segmentation, segment sorting, segment discarding, special-effect addition and music addition on the preprocessed video according to a preset clipping rule to generate a highlight video; and sending the highlight video to a cloud server for storage. Because the video is clipped and trimmed before being uploaded to the cloud server, unnecessary video segments are removed, which reduces the network load and avoids the impact of network fluctuation on real-time video transmission.
Fig. 1 is a flow chart of a video clip cloud storage in the present application.
Referring to fig. 1, S101 obtains a real-time video stream in a local area network.
One or more camera devices are connected to the local area network; the images they capture can be uploaded to the local area network and transmitted through it to a terminal device or a storage device. The storage device in this application refers to a cloud server, which receives the image data of the camera devices in real time.
The cloud server is provided with an edge computing box, which can be deployed outside the cloud server and provides edge computing services.
In this application, the edge computing box is connected to the camera devices through the local area network and is simultaneously connected to the cloud service. When a camera device starts shooting real-time video, the edge computing box communicates with the camera device in place of the cloud server and acquires the camera device's real-time video.
Referring to fig. 1, in S102, the real-time video stream is segmented according to a preset duration threshold to form a preprocessed video, and the preprocessed video is subjected to video segmentation, segment sorting, segment discarding, special effect adding, and music adding according to a preset clipping rule to generate a brocade video.
While receiving the real-time video, the edge computing box splits it according to a preset duration, and each split piece serves as a preprocessed video. Preferably, the preset duration is 10 minutes.
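The splitting step above can be sketched in Python (the source names no language; the `split_stream` function and the (timestamp, frame) stream representation are illustrative assumptions, not the patent's implementation):

```python
from typing import Iterable, List, Tuple

PRESET_DURATION_S = 10 * 60  # preset duration of 10 minutes, as in the text

def split_stream(frames: Iterable[Tuple[float, bytes]],
                 duration_s: float = PRESET_DURATION_S) -> List[List[Tuple[float, bytes]]]:
    """Split a stream of (timestamp, frame) pairs into consecutive
    preprocessed videos, each covering at most duration_s seconds."""
    chunks: List[List[Tuple[float, bytes]]] = []
    current: List[Tuple[float, bytes]] = []
    chunk_start = None
    for ts, frame in frames:
        if chunk_start is None:
            chunk_start = ts
        if ts - chunk_start >= duration_s:  # current preprocessed video is full
            chunks.append(current)
            current = []
            chunk_start = ts
        current.append((ts, frame))
    if current:
        chunks.append(current)
    return chunks
```

With the 10-minute preset duration, a continuous stream would yield consecutive preprocessed videos of at most 600 seconds each.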
Specifically, the edge computing box has a storage unit. When a preprocessed video is cut according to the preset duration, it is also backed up in this storage unit; if a preprocessed video is lost due to network fluctuation during transmission, the backup is retransmitted.
Before a preprocessed video is transmitted to the cloud server, it is clipped to further reduce the amount of transmitted data; preferably, the clipping is performed according to a preset clipping rule.
FIG. 2 is a flow diagram of the pre-processing video clip rules in the present application.
Referring to fig. 2, in S201 the face image in each frame of the preprocessed video is scored; in S202 the preprocessed video is split into video segments according to the scores, and the video segments whose scores are lower than a preset score threshold are discarded.
Each frame of the preprocessed video is extracted, and the face image in each frame is scored. When a frame contains no human image, or the human image is incomplete or unclear, the frame's score is set to 0 and the frame is discarded.
The scoring is performed according to the body motion amplitude and the facial expression motion amplitude of the face image. Specifically, to score the person in a frame, the pixels of the whole human figure are first extracted, pixel blocks of the facial features and of the four limbs are then cut out, and the ratios of these pixel blocks to the pixels of the human figure are calculated to obtain the score. The score is calculated as:

P = A·(S_feat / S_face) + B·(S_limb / S_body) + C·(d_1 / d_2)

where P is the score and A, B, C are manually set scale factors; S_feat / S_face is the ratio of the facial-feature pixel blocks to the face pixel blocks; S_limb / S_body is the ratio of the limb pixel blocks to the body pixel blocks; and d_1 / d_2 is a ratio between selected pixel-block distances, for example the ratio of the canthus distance to the earlobe distance.
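A minimal sketch of this scoring rule follows. The symbol names (`s_feat`, `s_face`, `s_limb`, `s_body`, `d_canthus`, `d_earlobe`) are illustrative stand-ins for the patent's pixel-block areas and distances, whose original symbols did not survive extraction; the zero-score rule for unusable frames follows the paragraph above:

```python
def frame_score(s_feat: float, s_face: float,
                s_limb: float, s_body: float,
                d_canthus: float, d_earlobe: float,
                a: float = 1.0, b: float = 1.0, c: float = 1.0) -> float:
    """P = A*(S_feat/S_face) + B*(S_limb/S_body) + C*(d_canthus/d_earlobe).

    A frame with no usable human image scores 0 and is discarded."""
    if min(s_face, s_body, d_earlobe) <= 0:  # incomplete or unclear human image
        return 0.0
    return (a * (s_feat / s_face)
            + b * (s_limb / s_body)
            + c * (d_canthus / d_earlobe))
```

The scale factors a, b, c correspond to the manually set A, B, C of the formula.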
S203, sorting the video clips according to the scores.
As mentioned above, frames that do not satisfy the condition are dropped, which makes the video incoherent; the remaining frames are therefore sorted according to their scores. Specifically, frames whose scores are in a coherent state are first combined into small video segments, and the small segments are then sorted by their scores. The coherent state means that the difference between the scores of two adjacent frames is smaller than a preset score-difference threshold, which in this application is set manually.
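The grouping-and-sorting rule above can be sketched as follows; the `build_segments` name, the per-frame score list as input, and ordering segments by their mean score are illustrative choices not specified in the source:

```python
from statistics import mean
from typing import List, Tuple

def build_segments(scores: List[float],
                   score_threshold: float,
                   diff_threshold: float) -> List[List[int]]:
    """Drop frames below score_threshold, combine runs of remaining frames
    whose adjacent score difference is below diff_threshold ("coherent state")
    into small segments, and sort the segments by descending mean score.
    Returns lists of frame indices."""
    kept = [(i, s) for i, s in enumerate(scores) if s >= score_threshold]
    segments: List[List[Tuple[int, float]]] = []
    for i, s in kept:
        if segments and abs(s - segments[-1][-1][1]) < diff_threshold:
            segments[-1].append((i, s))   # coherent with previous kept frame
        else:
            segments.append([(i, s)])     # start a new small segment
    segments.sort(key=lambda seg: mean(s for _, s in seg), reverse=True)
    return [[i for i, _ in seg] for seg in segments]
```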
After the sorting is finished, sound effects and special effects are added according to the duration of each small segment, completing the highlight video. Alternatively, those skilled in the art may score the human body or the face by other scoring methods.
Referring to fig. 1, in S103, the highlight video is sent to a cloud server for storage.
After the highlight video is finished, it is packaged and sent to the cloud server for storage. A post-production editor can later retrieve the highlight video from the cloud server and perform the final cut to obtain the final video. Because the video has already gone through an automatic first round of clipping, the post-production editor's clipping time is greatly reduced.
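The packaging-and-sending step, together with the backup-and-retransmit behavior described earlier, could look like the sketch below; `package_highlight`, `send_with_backup`, the tar packaging format, and the retry count are all illustrative assumptions rather than the patent's actual protocol:

```python
import io
import tarfile
import time

def package_highlight(name: str, video_bytes: bytes) -> bytes:
    """Package the finished highlight video into a tar archive for upload."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=name)
        info.size = len(video_bytes)
        tar.addfile(info, io.BytesIO(video_bytes))
    return buf.getvalue()

def send_with_backup(payload: bytes, upload, retries: int = 3) -> bool:
    """Send the package; on a network failure, retransmit from the backup
    kept in the edge box's storage unit."""
    backup = bytes(payload)  # local backup copy
    for _ in range(retries):
        try:
            upload(backup)
            return True
        except ConnectionError:
            time.sleep(0)  # placeholder back-off before retransmitting
    return False
```

Here `upload` is any callable that transmits bytes to the cloud server and raises `ConnectionError` on network fluctuation.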
The application also provides a video clip cloud storage device, which comprises an acquisition module 301, a clipping module 302 and a sending module 303.
Fig. 3 is a schematic view of a video clip cloud storage device in the present application.
Referring to fig. 3, the obtaining module 301 is configured to obtain a real-time video stream in a local area network.
One or more camera devices are connected to the local area network; the images they capture can be uploaded to the local area network and transmitted through it to a terminal device or a storage device. The storage device in this application refers to a cloud server, which receives the image data of the camera devices in real time.
The cloud server is provided with an edge computing box, which can be deployed outside the cloud server and provides edge computing services.
In this application, the acquisition module 301 further includes a taking-over unit, which is used for taking over the camera devices in the local area network and acquiring the real-time video stream through them. When a camera device starts shooting real-time video, the edge computing box communicates with the camera device in place of the cloud server and acquires the camera device's real-time video.
Referring to fig. 3, the clipping module 302 is configured to split the real-time video stream according to a preset duration threshold to form a preprocessed video, and to perform video segmentation, segment sorting, segment discarding, special-effect addition and music addition on the preprocessed video according to a preset clipping rule to generate a highlight video.
While receiving the real-time video, the edge computing box splits it according to a preset duration, and each split piece serves as a preprocessed video. Preferably, the preset duration is 10 minutes.
Specifically, the edge computing box has a storage unit. When a preprocessed video is cut according to the preset duration, it is also backed up in this storage unit; if a preprocessed video is lost due to network fluctuation during transmission, the backup is retransmitted.
Before a preprocessed video is transmitted to the cloud server, it is clipped to further reduce the amount of transmitted data; preferably, the clipping is performed according to a preset clipping rule.
Referring to fig. 2, in S201 the face image in each frame of the preprocessed video is scored; in S202 the preprocessed video is split into video segments according to the scores, and the video segments whose scores are lower than a preset score threshold are discarded.
Each frame of the preprocessed video is extracted, and the face image in each frame is scored. When a frame contains no human image, or the human image is incomplete or unclear, the frame's score is set to 0 and the frame is discarded.
The scoring is performed according to the body motion amplitude and the facial expression motion amplitude of the face image. Specifically, to score the person in a frame, the pixels of the whole human figure are first extracted, pixel blocks of the facial features and of the four limbs are then cut out, and the ratios of these pixel blocks to the pixels of the human figure are calculated to obtain the score. The score is calculated as:

P = A·(S_feat / S_face) + B·(S_limb / S_body) + C·(d_1 / d_2)

where P is the score and A, B, C are manually set scale factors; S_feat / S_face is the ratio of the facial-feature pixel blocks to the face pixel blocks; S_limb / S_body is the ratio of the limb pixel blocks to the body pixel blocks; and d_1 / d_2 is a ratio between selected pixel-block distances, for example the ratio of the canthus distance to the earlobe distance.
S203, sorting the video clips according to the scores.
As mentioned above, frames that do not satisfy the condition are dropped, which makes the video incoherent; the remaining frames are therefore sorted according to their scores. Specifically, frames whose scores are in a coherent state are first combined into small video segments, and the small segments are then sorted by their scores. The coherent state means that the difference between the scores of two adjacent frames is smaller than a preset score-difference threshold, which in this application is set manually.
After the sorting is finished, sound effects and special effects are added according to the duration of each small segment, completing the highlight video. Alternatively, those skilled in the art may score the human body or the face by other scoring methods.
Referring to fig. 3, the sending module 303 is configured to send the highlight video to a cloud server for storage.
After the highlight video is finished, it is packaged and sent to the cloud server for storage. A post-production editor can later retrieve the highlight video from the cloud server and perform the final cut to obtain the final video. Because the video has already gone through an automatic first round of clipping, the post-production editor's clipping time is greatly reduced.
Claims (10)
1. A video clip cloud storage method is characterized by comprising the following steps:
acquiring a real-time video stream in a local area network;
splitting the real-time video stream according to a preset duration threshold to form a preprocessed video, and performing video segmentation, segment sorting, segment discarding, special-effect addition and music addition on the preprocessed video according to a preset clipping rule to generate a highlight video;
and sending the highlight video to a cloud server for storage.
2. The video clip cloud storage method according to claim 1, wherein the preset clipping rule comprises:
scoring the face image in each frame of the preprocessed video;
splitting the preprocessed video into video segments according to the scores, and discarding the video segments whose scores are lower than a preset score threshold;
and sorting the video segments according to the scores.
3. The video clip cloud storage method according to claim 2, wherein the scoring comprises:
and scoring according to the body motion amplitude and the facial expression motion amplitude of the face image.
4. The video clip cloud storage method according to claim 3, wherein the body motion amplitude and the expression motion amplitude are obtained by comparing the face image with a standard face image.
5. The video clip cloud storage method according to claim 1, wherein the acquiring a real-time video stream in a local area network comprises: and taking over the camera equipment in the local area network, and acquiring the real-time video stream through the camera equipment.
6. A video clip cloud storage device, comprising:
the acquisition module is used for acquiring real-time video stream in the local area network;
the clipping module is used for splitting the real-time video stream according to a preset duration threshold to form a preprocessed video, and for performing video segmentation, segment sorting, segment discarding, special-effect addition and music addition on the preprocessed video according to a preset clipping rule to generate a highlight video;
and the sending module is used for sending the highlight video to a cloud server for storage.
7. The video clip cloud storage device of claim 6, wherein said clipping module further comprises:
the scoring unit is used for scoring the face image in each frame of the preprocessed video;
the processing unit is used for splitting the preprocessed video into video segments according to the scores and discarding the video segments whose scores are lower than a preset score threshold;
and the sorting unit is used for sorting the video segments according to the scores.
8. The video clip cloud storage device of claim 7, wherein said scoring comprises:
and scoring according to the body motion amplitude and the facial expression motion amplitude of said face image.
9. The video clip cloud storage device of claim 8, wherein said body motion amplitudes and expression motion amplitudes are obtained by comparing said face image with a standard face image.
10. The video clip cloud storage device of claim 6, wherein said obtaining module further comprises:
and the taking-over unit is used for taking over the camera equipment in the local area network and acquiring the real-time video stream through the camera equipment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210001240.8A CN114007084B (en) | 2022-01-04 | 2022-01-04 | Video clip cloud storage method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210001240.8A CN114007084B (en) | 2022-01-04 | 2022-01-04 | Video clip cloud storage method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114007084A true CN114007084A (en) | 2022-02-01 |
CN114007084B CN114007084B (en) | 2022-09-09 |
Family
ID=79932584
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210001240.8A Active CN114007084B (en) | 2022-01-04 | 2022-01-04 | Video clip cloud storage method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114007084B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070212023A1 (en) * | 2005-12-13 | 2007-09-13 | Honeywell International Inc. | Video filtering system |
CN109121021A (en) * | 2018-09-28 | 2019-01-01 | 北京周同科技有限公司 | A kind of generation method of Video Roundup, device, electronic equipment and storage medium |
CN109862388A (en) * | 2019-04-02 | 2019-06-07 | 网宿科技股份有限公司 | Generation method, device, server and the storage medium of the live video collection of choice specimens |
CN109982109A (en) * | 2019-04-03 | 2019-07-05 | 睿魔智能科技(深圳)有限公司 | The generation method and device of short-sighted frequency, server and storage medium |
CN110401873A (en) * | 2019-06-17 | 2019-11-01 | 北京奇艺世纪科技有限公司 | Video clipping method, device, electronic equipment and computer-readable medium |
CN112347941A (en) * | 2020-11-09 | 2021-02-09 | 南京紫金体育产业股份有限公司 | Motion video collection intelligent generation and distribution method based on 5G MEC |
CN112445935A (en) * | 2020-11-25 | 2021-03-05 | 开望(杭州)科技有限公司 | Automatic generation method of video selection collection based on content analysis |
CN113676671A (en) * | 2021-09-27 | 2021-11-19 | 北京达佳互联信息技术有限公司 | Video editing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114007084B (en) | 2022-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102082816B1 (en) | Method for improving the resolution of streaming files | |
CN109145784B (en) | Method and apparatus for processing video | |
EP3579188A3 (en) | Method, apparatus, device and computer readable storage medium for reconstructing three-dimensional scene | |
CN107133590B (en) | A kind of identification system based on facial image | |
CN103369289A (en) | Communication method of video simulation image and device | |
CN110677718B (en) | Video identification method and device | |
EP3823267A1 (en) | Static video recognition | |
CN111985281A (en) | Image generation model generation method and device and image generation method and device | |
CN110234015A (en) | Live broadcast control method and device, storage medium and terminal | |
CN108647613B (en) | Examinee examination method applied to examination room | |
CN114007084B (en) | Video clip cloud storage method and device | |
CN113593587B (en) | Voice separation method and device, storage medium and electronic device | |
CN113610731B (en) | Method, apparatus and computer program product for generating image quality improvement model | |
CN113920023A (en) | Image processing method and device, computer readable medium and electronic device | |
CN113709401A (en) | Video call method, device, storage medium, and program product | |
CN112533024A (en) | Face video processing method and device and storage medium | |
CN112565178A (en) | Unmanned aerial vehicle power equipment system of patrolling and examining based on streaming media technique | |
CN112132079A (en) | Method, device and system for monitoring students in online teaching | |
CN110661785A (en) | Video processing method, device and system, electronic equipment and readable storage medium | |
CN112261474A (en) | Multimedia video image processing system and processing method | |
CN111898464A (en) | Face recognition method based on 5G network | |
CN205103611U (en) | Intelligence audio video collecting analytical equipment | |
CN116189251A (en) | Real-time face image driving method and device, electronic equipment and storage medium | |
CN117896552B (en) | Video conference processing method, video conference system and related device | |
CN208225072U (en) | A kind of face identification device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |