WO2021109952A1 - Video editing method and apparatus, server, and computer-readable storage medium - Google Patents


Info

Publication number
WO2021109952A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
target
data
value
preset
Prior art date
Application number
PCT/CN2020/132585
Other languages
English (en)
Chinese (zh)
Inventor
杜中强
谢春燕
张意烽
申武
Original Assignee
成都市喜爱科技有限公司
Priority date
Filing date
Publication date
Application filed by 成都市喜爱科技有限公司 filed Critical 成都市喜爱科技有限公司
Publication of WO2021109952A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/2187 - Live feed
    • H04N 21/232 - Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N 21/23424 - Processing of video elementary streams involving splicing one content stream with another, e.g. for inserting or substituting an advertisement
    • H04N 21/2393 - Interfacing the upstream path of the transmission network involving handling client requests
    • H04N 21/8456 - Structuring of content by decomposing the content in the time domain, e.g. in time segments

Definitions

  • This application relates to the field of video processing technology, and in particular to a video editing method, device, server, and computer-readable storage medium.
  • In some amusement parks, shooting equipment is installed on the amusement rides to record video while tourists ride. After shooting, highlight moments are edited out of the footage and stitched into a combined video, helping visitors keep a record of the memorable moments of their ride.
  • Traditionally, the original video is watched manually and the highlight moments are selected by hand to edit the combined video. This process requires considerable manpower and time, resulting in low efficiency.
  • One objective of the present application is to provide a video editing method, device, server, and computer-readable storage medium that address the inefficiency caused by manually selecting highlight moments for video editing.
  • The present application provides a video editing method applied to a server. The server is communicatively connected to a shooting device, the shooting device is installed on a sports device, and the shooting device includes a motion data collection device. The method includes: receiving the original video sent by the shooting device and the motion data of the sports device, where the motion data is collected by the motion data collection device; selecting at least one video segment from the original video according to the motion data; and inserting the at least one video segment into a preset video template to obtain a composite video, where the video template includes at least one template segment and the composite video includes the at least one video segment and at least one template segment.
  • In some embodiments, the motion data includes a plurality of data values and a collection time point corresponding to each data value. The step of selecting at least one video segment from the original video according to the motion data includes: obtaining a target selection condition corresponding to the motion data; locating a target data value from the motion data according to the target selection condition, and obtaining the target time point corresponding to the target data value; and, based on the target time point, selecting from the original video a video clip of a preset duration before and/or after the target time point, to obtain the at least one video clip.
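The clip-selection step above can be sketched in code. This is a minimal illustrative sketch, not part of the published application; the function name, the 10-second default preset duration, and the 60-second video length are assumptions chosen for the example.

```python
def select_clip_window(target_time_s, preset_duration_s=10.0,
                       before=True, after=True, video_length_s=60.0):
    """Return the (start, end) window of a clip of preset duration taken
    before and/or after the target time point, clamped to the video bounds."""
    start = target_time_s - preset_duration_s if before else target_time_s
    end = target_time_s + preset_duration_s if after else target_time_s
    # Clamp the window so it never extends past the original video.
    return max(0.0, start), min(video_length_s, end)

print(select_clip_window(30.0))                            # window on both sides
print(select_clip_window(3.0, before=True, after=False))   # clamped at the start
```

A real implementation would then cut the original video between the returned start and end timestamps.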
  • In some embodiments, the server pre-stores a plurality of device identifications and a selection condition corresponding to each device identification. The step of obtaining the target selection condition corresponding to the motion data includes: analyzing the motion data to determine the target device identifier corresponding to the motion data; and determining, according to the target device identifier, the target selection condition corresponding to the motion data from the plurality of selection conditions.
  • In some embodiments, the motion data includes air pressure data, the air pressure data includes a plurality of air pressure data values and a collection time point corresponding to each air pressure data value, and the target selection condition is that an air pressure data value reaches a preset air pressure value. The step of locating a target data value from the motion data according to the target selection condition and obtaining the target time point corresponding to the target data value includes: locating, from the air pressure data, a target air pressure data value that reaches the preset air pressure value; and acquiring the target time point corresponding to the target air pressure data value.
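A hedged sketch of this threshold-based locator follows. The application does not specify how "reaching" the preset value is compared; the example assumes the pressure drops to or below the preset value (as it would when a ride climbs), which is an illustrative choice.

```python
def locate_target_time(pressure_samples, preset_pressure_hpa):
    """Return the collection time of the first air pressure sample that
    reaches the preset value. pressure_samples: list of (time_s, hPa) tuples.
    'Reaching' is assumed to mean dropping to or below the preset value."""
    for time_s, pressure in pressure_samples:
        if pressure <= preset_pressure_hpa:
            return time_s
    return None  # no sample satisfied the selection condition

samples = [(0, 1013.0), (5, 1008.5), (10, 1002.0), (15, 998.7)]
print(locate_target_time(samples, 1002.0))  # -> 10
```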
  • In some embodiments, the motion data includes air pressure data, the air pressure data includes a plurality of air pressure data values and a collection time point corresponding to each air pressure data value, and the target selection condition is that the air pressure change rate between two adjacent air pressure data values reaches a preset air pressure change rate. The step of locating a target data value from the motion data according to the target selection condition and obtaining the target time point corresponding to the target data value includes: determining, from the plurality of air pressure data values, two adjacent air pressure data values whose air pressure change rate reaches the preset air pressure change rate; taking the latter of the two adjacent air pressure data values as the target air pressure data value; and obtaining the target time point corresponding to the target air pressure data value.
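The adjacent-sample change-rate condition can be sketched as follows; this is an illustration under the assumption that the rate is the absolute pressure difference divided by the time between the two samples.

```python
def locate_pressure_jump(pressure_samples, preset_rate_hpa_per_s):
    """Find two adjacent samples whose air pressure change rate reaches the
    preset rate and return the time of the latter sample, which the
    application takes as the target data value."""
    for (t0, p0), (t1, p1) in zip(pressure_samples, pressure_samples[1:]):
        rate = abs(p1 - p0) / (t1 - t0)
        if rate >= preset_rate_hpa_per_s:
            return t1
    return None

samples = [(0, 1000.0), (1, 999.8), (2, 997.0), (3, 996.9)]
print(locate_pressure_jump(samples, 2.0))  # -> 2 (a 2.8 hPa/s drop)
```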
  • In some embodiments, the motion data includes position data, the position data includes a plurality of position data values and a collection time point corresponding to each position data value, and the target selection condition is that a position data value corresponds to a preset position. The step of locating a target data value from the motion data according to the target selection condition and obtaining the target time point corresponding to the target data value includes: locating, from the position data, the target position data value corresponding to the preset position; and acquiring the target time point corresponding to the target position data value.
  • In some embodiments, the motion data includes acceleration data, the acceleration data includes a plurality of acceleration data values and a collection time point corresponding to each acceleration data value, and the target selection condition is that an acceleration data value reaches a preset acceleration value. The step of locating a target data value from the motion data according to the target selection condition and obtaining the target time point corresponding to the target data value includes: locating, from the acceleration data, a target acceleration data value that reaches the preset acceleration value; and obtaining the target time point corresponding to the target acceleration data value.
  • In some embodiments, the motion data includes acceleration data, the acceleration data includes a plurality of acceleration data values and a collection time point corresponding to each acceleration data value, and the target selection condition is that the difference between each of a consecutive preset number of acceleration data values and the gravitational acceleration is greater than a preset value. The step of locating a target data value from the motion data according to the target selection condition and obtaining the target time point corresponding to the target data value includes: determining, from the plurality of acceleration data values, a consecutive preset number of acceleration data values whose difference from the gravitational acceleration is greater than the preset value; taking the last of the preset number of acceleration data values as the target acceleration data value; and obtaining the target time point corresponding to the target acceleration data value.
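A sketch of the sustained-acceleration condition follows. The 9.81 m/s² constant and the example thresholds are illustrative assumptions; the application leaves the preset difference and count unspecified.

```python
G = 9.81  # standard gravitational acceleration, m/s^2 (assumed constant)

def locate_sustained_acceleration(accel_samples, preset_diff, preset_count):
    """Find a run of `preset_count` consecutive samples whose difference from
    gravitational acceleration exceeds `preset_diff`, and return the time of
    the last sample in the run (the target data value)."""
    run = 0
    for time_s, a in accel_samples:
        run = run + 1 if abs(a - G) > preset_diff else 0
        if run >= preset_count:
            return time_s
    return None

samples = [(0, 9.8), (1, 14.0), (2, 15.2), (3, 15.5), (4, 9.9)]
print(locate_sustained_acceleration(samples, preset_diff=3.0, preset_count=3))  # -> 3
```

Requiring a consecutive run rather than a single spike filters out one-off sensor glitches.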
  • In some embodiments, the motion data includes angular velocity data, the angular velocity data includes a plurality of angular velocity data values and a collection time point corresponding to each angular velocity data value, and the target selection condition is that an angular velocity data value reaches a preset angular velocity value. The step of locating a target data value from the motion data according to the target selection condition and obtaining the target time point corresponding to the target data value includes: locating, from the angular velocity data, a target angular velocity data value that reaches the preset angular velocity value; and acquiring the target time point corresponding to the target angular velocity data value.
  • In some embodiments, the motion data includes angular velocity data, the angular velocity data includes a plurality of angular velocity data values and a collection time point corresponding to each angular velocity data value, and the target selection condition is that the accumulated value of at least one angular velocity data value within a preset time reaches a preset value. The step of locating a target data value from the motion data according to the target selection condition and obtaining the target time point corresponding to the target data value includes: determining, from the plurality of angular velocity data values, at least one angular velocity data value whose accumulated value within the preset time reaches the preset value; taking the last of the at least one angular velocity data value as the target angular velocity data value; and obtaining the target time point corresponding to the target angular velocity data value.
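One plausible reading of the accumulation condition is a sliding time window, sketched below; the window interpretation, the magnitudes summed, and the example thresholds are all assumptions for illustration.

```python
def locate_accumulated_rotation(gyro_samples, preset_window_s, preset_total):
    """Sum angular-velocity magnitudes over a sliding window of
    `preset_window_s` seconds; when the accumulated value reaches
    `preset_total`, return the time of the last sample in the window."""
    window = []  # (time_s, magnitude) samples inside the current window
    for time_s, w in gyro_samples:
        window.append((time_s, abs(w)))
        # Drop samples that have fallen out of the preset time window.
        window = [(t, v) for t, v in window if time_s - t <= preset_window_s]
        if sum(v for _, v in window) >= preset_total:
            return time_s
    return None

samples = [(0, 10), (1, 20), (2, 40), (3, 45), (4, 5)]
print(locate_accumulated_rotation(samples, preset_window_s=2, preset_total=100))  # -> 3
```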
  • In some embodiments, the server establishes a face database in advance, and before the step of selecting at least one video segment from the original video according to the motion data, the method further includes: performing face recognition on the original video, and obtaining the correspondence between faces and the original video and storing it in the face database.
  • In some embodiments, the face database includes the correspondence between faces and original videos and the correspondence between original videos and composite videos, and the server is also communicatively connected to a mobile terminal. After the step of inserting the at least one video clip into the preset video template to obtain the composite video, the method further includes: obtaining a video acquisition request sent by the mobile terminal, where the video acquisition request includes a face image; performing face recognition on the face image to obtain the target face corresponding to the face image; determining, based on the face database, the target composite video corresponding to the target face; and sending the target composite video to the mobile terminal, so that the mobile terminal displays the target composite video.
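The request flow above can be sketched with the recognition step stubbed out. `recognize_face`, the database layout, and the file names are all hypothetical placeholders; a real system would run an actual face-recognition model and a persistent store.

```python
# Hypothetical face database: recognized face ID -> composite videos.
FACE_DB = {"face_42": ["composite_a.mp4"]}

def recognize_face(face_image_bytes):
    # Placeholder: a real implementation would run face recognition here
    # and return the matching face identifier from the database.
    return "face_42"

def handle_video_request(face_image_bytes):
    """Resolve a mobile-terminal request carrying a face image to the
    composite videos featuring that face."""
    target_face = recognize_face(face_image_bytes)
    return FACE_DB.get(target_face, [])

print(handle_video_request(b"<jpeg bytes>"))  # -> ['composite_a.mp4']
```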
  • In some embodiments, the server establishes a face database in advance and is also communicatively connected to a mobile terminal. Before the step of selecting at least one video segment from the original video according to the motion data, the method further includes: performing face recognition on the original video, and obtaining the correspondence between faces and the original video and storing it in the face database; obtaining a video acquisition request sent by the mobile terminal, where the video acquisition request includes a face image; performing face recognition on the face image to obtain the target face corresponding to the face image; and determining, based on the face database, the target original video corresponding to the target face, taking the target original video as the original video, and performing the step of selecting at least one video segment from the original video according to the motion data. After the step of inserting the at least one video segment into the preset video template to obtain the composite video, the method further includes: sending the composite video corresponding to the target original video to the mobile terminal, so that the mobile terminal displays the composite video corresponding to the target original video.
  • In some embodiments, the original video is obtained by the shooting device preprocessing the captured video content. The step of preprocessing the captured video content includes: the shooting device performing face detection on the video content to obtain the face area of each video frame; and the shooting device calculating the proportion of the face area in each video frame and deleting the video frames whose proportion is less than a preset ratio, to obtain the original video.
  • In other embodiments, the original video is obtained by the shooting device preprocessing the captured video content as follows: the shooting device performs face detection on the video content to obtain the face area of each video frame; the shooting device then obtains the face pixel size corresponding to the face area of each video frame and deletes the video frames whose face pixel size is smaller than a preset minimum face pixel size, to obtain the original video.
  • the step of preprocessing the captured video content by the photographing device further includes: the photographing device deletes overexposed or blurred video frames in the video content.
  • In some embodiments, the video template further includes a title segment and a trailer segment, and the at least one template segment is arranged between the title and the trailer; the composite video likewise includes the title and the trailer, with the at least one video segment and the at least one template segment arranged between them.
  • the template segment includes at least one of a template video, a special effect picture, and a subtitle.
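The composite layout described above (title, interleaved video and template segments, trailer) can be sketched as a list assembly. The alternating order is an assumption; the application permits other arrangements of segments between the title and trailer.

```python
def assemble_composite(video_segments, template_segments, title=None, trailer=None):
    """Interleave selected video segments with template segments (template
    video, special-effect pictures, subtitles) between an optional title
    and trailer. Segments here are just labels standing in for media."""
    body = []
    for i, seg in enumerate(video_segments):
        body.append(seg)
        if i < len(template_segments):
            body.append(template_segments[i])
    body.extend(template_segments[len(video_segments):])  # leftover templates
    parts = []
    if title:
        parts.append(title)
    parts.extend(body)
    if trailer:
        parts.append(trailer)
    return parts

print(assemble_composite(["clip1", "clip2"], ["fx1"], title="title", trailer="trailer"))
```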
  • The present application also provides a video editing device applied to a server, where the server is communicatively connected to a shooting device, and the shooting device is installed on the sports equipment and includes a motion data collection device. The device includes: a receiving module configured to receive the original video sent by the shooting device and the motion data of the sports device, where the motion data is collected by the motion data collection device; a selection module configured to select at least one video segment from the original video according to the motion data; and a processing module configured to insert the at least one video segment into a preset video template to obtain a composite video, where the video template includes at least one template segment and the composite video includes the at least one video segment and at least one template segment.
  • the present application also provides a server, the server includes: one or more processors; a memory configured to store one or more programs, when the one or more programs are When the one or more processors are executed, the one or more processors are caused to implement the above-mentioned video editing method.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the above-mentioned video editing method is realized.
  • the present application provides a video editing method, device, server, and computer-readable storage medium.
  • In the embodiments of the application, the original video is captured by the shooting device and the motion data of the sports device is collected by the motion data collection device. After the server receives the original video and motion data sent by the shooting device, it selects at least one video segment from the original video according to the motion data and inserts the at least one video segment into a preset video template to obtain a composite video. Because the highlights in the original video are selected automatically according to the motion data of the sports device and assembled into the composite video, no human participation is required, which improves the efficiency of video editing.
  • Fig. 1 shows a schematic diagram of an application scenario of a video editing method provided by an embodiment of the present application.
  • Fig. 2 shows a schematic flowchart of a video editing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of sub-steps of step S102 in the video editing method shown in FIG. 2.
  • Fig. 4 shows an example diagram of a composite video provided by an embodiment of the present application.
  • Fig. 5 shows another example diagram of a composite video provided by an embodiment of the present application.
  • Fig. 6 shows another schematic flowchart of a video editing method provided by an embodiment of the present application.
  • Fig. 7 shows another schematic flowchart of a video editing method provided by an embodiment of the present application.
  • Fig. 8 shows a block diagram of functional modules of a video editing device provided by an embodiment of the present application.
  • Fig. 9 shows a block diagram of functional modules of a server provided in an embodiment of the present application.
  • Reference numerals: 10 - server; 20 - shooting device; 30 - mobile terminal; 21 - motion data collection device; 11 - processor; 12 - memory; 13 - bus; 100 - video editing device; 101 - receiving module; 102 - selection module; 103 - processing module.
  • FIG. 1 shows a schematic diagram of an application scenario of a video editing method provided by an embodiment of the present application, including a server 10, at least one shooting device 20, and at least one mobile terminal 30. Each shooting device 20 is communicatively connected to the server 10 through a network, and each mobile terminal 30 is connected to the server 10 through a network, enabling data communication or interaction between the server 10 and the shooting device 20, and between the server 10 and the mobile terminal 30.
  • The shooting equipment 20 is installed on sports equipment. The sports equipment can be equipment for extreme sports or for other amusement items. The extreme sports can include extreme cycling, low-altitude parachuting, high-speed racing, diving, downhill skiing, and the like; the amusement items can include roller coasters, kite-flying rides, pirate ships, rapids rides, carousels, and the like. The following embodiments take amusement-ride equipment as an example.
  • At least one shooting device 20 is installed on each sports equipment.
  • The shooting device 20 can be a sports camera, a camera, a device equipped with a camera module, or the like. Each shooting device 20 includes a motion data collection device 21, which is configured to collect motion data of the sports device while the sports device on which the shooting device 20 is fixed is moving.
  • the motion data collection device 21 may include, but is not limited to, a barometer, a gyroscope, an acceleration sensor, a speed sensor, a gravity sensor, etc.
  • The shooting device 20 is configured to shoot a video of the tourists while they ride the amusement device, and to send the original video and the motion data collected by the motion data collection device 21 to the server 10.
  • Because the video content actually shot by the shooting device 20 on the amusement device is long and large, and may contain segments with poor quality (for example, frames with no face or an incomplete face), the shooting device 20 preprocesses the actually shot video content before transmission in order to shorten the video and reduce the amount of data transmitted; that is, the original video is obtained after the shooting device 20 preprocesses the actually shot video content.
  • The process of preprocessing the captured video content by the shooting device 20 may include: first, the shooting device 20 performs face detection on the video content to obtain the face area of each video frame; then, the shooting device 20 calculates the proportion of the face area in each video frame and deletes the video frames whose proportion is less than a preset ratio (for example, 10%), to obtain the original video. In other words, the shooting device 20 deletes the video frames in which the face occupies less than the preset proportion of the picture, cutting out segments with no face, an incomplete face, or a poor face angle.
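The face-area filter just described can be sketched as below. The frame representation (dicts with pixel areas) is an illustrative assumption; a real device would compute areas from its face detector's bounding boxes.

```python
def filter_frames_by_face_ratio(frames, preset_ratio=0.10):
    """Keep only frames whose detected face area covers at least
    `preset_ratio` of the frame, mirroring the preprocessing step above.
    Each frame is a dict with 'face_area' and 'frame_area' in pixels."""
    kept = []
    for f in frames:
        if f["face_area"] / f["frame_area"] >= preset_ratio:
            kept.append(f)
    return kept

frames = [
    {"id": 1, "face_area": 200_000, "frame_area": 2_073_600},  # ~9.6%, dropped
    {"id": 2, "face_area": 300_000, "frame_area": 2_073_600},  # ~14.5%, kept
]
print([f["id"] for f in filter_frames_by_face_ratio(frames)])  # -> [2]
```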
  • the process of preprocessing the captured video content by the shooting device 20 may include: first, the shooting device 20 performs face detection on the video content to obtain the face area of each video frame in the video content; then, shooting The device 20 obtains the face pixel size corresponding to the face area of each video frame, and deletes the video frame whose face pixel size is less than the preset minimum face pixel size to obtain the original video. That is, the shooting device 20 preprocesses the captured video content according to the preset minimum face pixel size.
  • In some embodiments, the face frame that the shooting device 20 detects is a square whose minimum side length depends on the short side of the video frame and is not less than 48 pixels. For example, if the video frame is 4096*3200 pixels, the minimum face pixel size is 66*66 pixels, and video frames whose face pixel size is below the minimum face pixel size are deleted.
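One possible reading of this rule, shown below, is that the minimum face side equals the frame's short side divided by 48, with a floor of 48 pixels; that divisor is a reverse-engineered assumption that happens to reproduce the 4096*3200 -> 66*66 example, not a rule stated in the application.

```python
def min_face_pixels(frame_w, frame_h, divisor=48, floor_px=48):
    """Assumed rule: minimum face side = short side of the frame // divisor,
    but never below floor_px pixels. Both constants are assumptions."""
    short_side = min(frame_w, frame_h)
    side = max(floor_px, short_side // divisor)
    return side, side

print(min_face_pixels(4096, 3200))  # -> (66, 66), matching the text's example
print(min_face_pixels(640, 480))    # short side too small: floor applies
```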
  • the process of preprocessing the captured video content by the shooting device 20 may also include quality screening, that is, the shooting device 20 deletes overexposed or blurred video frames in the video content.
  • The server 10 is configured to receive the original video and motion data sent by the shooting device 20, select highlights from the original video according to the motion data, and insert the highlights into a preset video template to obtain a composite video. The server 10 is also configured to, upon receiving a video acquisition request sent by the mobile terminal 30, acquire the target composite video corresponding to the request and send it to the mobile terminal 30.
  • the server 10 may be a single server or a server group.
  • the mobile terminal 30 is configured to receive the target composite video sent by the server 10, and display the target composite video for the user to select.
  • the mobile terminal 30 may be, but is not limited to, a smart phone, a tablet computer, a portable notebook computer, a desktop computer, and the like.
  • A third-party application (APP) is installed on the mobile terminal 30. The third-party application can run a mini program through which the user interacts with the server 10; for example, after riding the amusement device, the user can watch or download their own ride videos through the mini program. The mini program can obtain the user's face image and match it against the composite videos on the server 10 to find the target composite video featuring the user, then display that video for the user to watch, download, and so on. Alternatively, a dedicated application can be installed on the mobile terminal 30, so that the user interacts with the server 10 through the application to view and download the target composite video featuring the user.
  • FIG. 2 shows a schematic flowchart of a video editing method provided by an embodiment of the present application.
  • the video editing method is applied to the server 10 and may include the following steps:
  • S101 Receive the original video sent by the shooting equipment and the motion data of the sports equipment, where the motion data is collected by the motion data collection device.
  • the shooting device 20 is fixedly installed on the sports equipment, and it is ensured that one shooting device 20 can capture the video of at least one user riding on the sports equipment.
  • For example, if the sports equipment is a roller coaster, one shooting device 20 can be provided for each row of seats, allowing each shooting device 20 to capture the play video of one row of users.
  • During operation of the sports equipment, the shooting device 20 is always on: it captures the user's original video while the motion data collection device 21 collects the motion data of the sports device. The original video and the motion data correspond in time; that is, a video frame and a motion data value at the same time point correspond to each other.
  • In some embodiments, the original video is obtained by the shooting device 20 preprocessing the captured video content, and the original video takes at least one specific user as the protagonist. For example, if each row of seats of a roller coaster is provided with a shooting device 20, the original video taken by one shooting device 20 takes the one or two users in the corresponding row of seats as the protagonists.
  • The original video can be one complete video or several independent short videos. Generally, the length of the original video can be 30 s to 1 min.
  • S102 Select at least one video segment from the original video according to the motion data.
  • The motion data collected by the motion data collection device 21 may be one or more of air pressure data, position data, acceleration data, velocity data, angular velocity data, gravity data, and the like. The motion data includes multiple data values and the collection time point corresponding to each data value.
  • The video clips selected from the original video can include, but are not limited to, clips of when the sports equipment just starts to move or clips that reflect the characteristics of the sports equipment. For example, if the sports equipment is a roller coaster, a pirate ship, or similar, the clip of the coaster climbing to the apex and starting to dive can be selected; if the sports equipment has large acceleration changes, such as a rapids ride or a drop tower, the clip at the sudden acceleration can be selected; if the sports equipment is rotating equipment such as a carousel or a flying-chair ride, the clip of arriving at a designated location (for example, a location with beautiful scenery or a wide field of view) can be selected. These video clips can be selected according to the motion data recorded while the sports equipment runs; for example, for a roller coaster, the clips with the largest or smallest gravity value can be selected.
  • The method of selecting at least one video segment from the original video may include: first, obtaining a data value in the motion data that reaches a set value, or that changes significantly relative to the previous or next data value; then, taking the collection time point corresponding to that data value as a reference point, and selecting from the original video a video clip of a certain length (for example, 5-15 s) before and/or after the reference point.
  • the set value can be a set air pressure value, a set acceleration value, a set angular velocity value, etc.
  • After the server 10 receives the original video and motion data sent by the shooting device 20, if a data value in the motion data satisfies the selection condition, a video clip is selected according to the time point corresponding to that data value.
  • S102 may include:
  • the target selection condition refers to the condition for locating the target data value from the motion data.
  • After the target data value is located, the collection time point corresponding to the target data value can be used as a reference point, and a video clip of a certain length (for example, 5-15 s) before and/or after the reference point is selected from the original video.
  • In some embodiments, the process of obtaining the target selection condition corresponding to the motion data may include:
  • the server 10 pre-stores multiple device identifications and selection conditions corresponding to each device identification.
  • the device identifications are configured to represent sports equipment; different pieces of sports equipment correspond to different equipment identifications, and the same sports equipment corresponds to the same equipment identification.
  • For example, the equipment identification of the roller coaster is 1, the equipment identification of the rapids ride is 2, the equipment identification of the carousel is 3, and so on; the corresponding selection conditions are, respectively: the air pressure data value reaches the preset air pressure value, the acceleration data value reaches the preset acceleration value, and the position data value corresponds to the preset position, as shown in Table 1 below:
  • Table 1:
  • Equipment identification | Selection condition
  • 1 | the air pressure data value reaches the preset air pressure value
  • 2 | the acceleration data value reaches the preset acceleration value
  • 3 | the position data value corresponds to the preset position
  • After the server 10 receives the original video and motion data sent by the shooting device 20, it first analyzes the motion data and determines the target device identifier corresponding to the motion data; for example, the target device identifier is 2.
  • the target selection condition corresponding to the motion data is determined from the multiple selection conditions.
  • After the server 10 obtains the target device identifier corresponding to the motion data, it can obtain the target selection condition corresponding to that target device identifier. For example, if the target device identifier is 2, the target selection condition is that the acceleration data value reaches the preset acceleration value.
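The device-identifier lookup described above can be sketched as a simple pre-stored mapping. The dictionary contents mirror Table 1; the variable and function names are illustrative only, not from the application.

```python
# Hypothetical pre-stored mapping, mirroring Table 1 above.
SELECTION_CONDITIONS = {
    1: "air pressure data value reaches the preset air pressure value",
    2: "acceleration data value reaches the preset acceleration value",
    3: "position data value corresponds to the preset position",
}


def target_selection_condition(target_device_id):
    """Look up the selection condition pre-stored for a device identifier."""
    return SELECTION_CONDITIONS[target_device_id]
```

The device-type variant (Table 2) is the same lookup keyed by type instead of identifier.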
  • the process of obtaining the target selection condition corresponding to the motion data may include:
  • the server 10 pre-stores multiple equipment types and selection conditions corresponding to each equipment type.
  • the equipment type refers to the type of each sports equipment.
  • The amusement equipment in the amusement park can be classified in advance to obtain all the equipment types. For example, if the amusement equipment in the amusement park includes roller coasters, pirate ships, rapids rides, drop towers, carousels and flying chairs, three equipment types can be obtained: equipment with large height changes, equipment with large acceleration changes, and rotating equipment. Then, a selection condition is set for each equipment type; for example, the selection conditions corresponding to the three types are, respectively: the air pressure data value reaches the preset air pressure value, the acceleration data value reaches the preset acceleration value, and the position data value corresponds to the preset position, as shown in Table 2 below:
  • Table 2:
  • Equipment type | Selection condition
  • Equipment with large height changes | the air pressure data value reaches the preset air pressure value
  • Equipment with large acceleration changes | the acceleration data value reaches the preset acceleration value
  • Rotating equipment | the position data value corresponds to the preset position
  • After the server 10 obtains the motion data sent by the photographing device 20, it may determine the target device type corresponding to the motion data based on the pre-classified device types.
  • the target selection condition corresponding to the motion data is determined from the multiple selection conditions.
  • After the server 10 obtains the target device type corresponding to the motion data, it can obtain the target selection condition corresponding to that target device type. For example, if the target device type is equipment with large height changes, the target selection condition is that the air pressure data value reaches the preset air pressure value.
  • The selection conditions corresponding to some equipment identifications or equipment types can include multiple conditions. For example, if the equipment identification of a roller coaster is 1 and its equipment type is equipment with large height changes, the corresponding selection conditions can be: the air pressure data value reaches the preset air pressure value, the gravity data value reaches the preset gravity value, the acceleration data value reaches the preset acceleration value, etc. At the same time, in order to avoid abnormalities, when there are multiple selection conditions, a priority can be set for each selection condition according to the corresponding device identification or device type.
  • For example, if the device identification of a roller coaster is 1 and the corresponding selection conditions are: the air pressure data value reaches the preset air pressure value, the gravity data value reaches the preset gravity value, and the acceleration data value reaches the preset acceleration value, the priority decreases in that order, and the target selection condition is determined according to priority during execution.
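The priority scheme above amounts to trying the selection conditions in order and keeping the first one that yields a target data value. A minimal sketch follows; the convention that each condition is a function returning a target time point or None is an assumption for this example.

```python
def locate_by_priority(conditions, motion_data):
    """Try selection conditions in priority order (highest first).

    conditions: list of functions, each taking the motion data and
    returning a target time point, or None when the condition does
    not locate a target data value.
    Returns the first successful result, or None if all fail.
    """
    for condition in conditions:
        t = condition(motion_data)
        if t is not None:
            return t
    return None
```

A failed high-priority condition (e.g. abnormal barometer data) then falls through to the next one instead of aborting.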
  • S1022 According to the target selection condition, locate the target data value from the motion data, and obtain the target time point corresponding to the target data value.
  • The target selection condition may include: the air pressure data value reaches the preset air pressure value; or the air pressure change rate of two adjacent air pressure data values reaches the preset air pressure change rate; or the position data value corresponds to the preset position; or the acceleration data value reaches the preset acceleration value; or the difference between each of a consecutive preset number of acceleration data values and the gravitational acceleration is greater than the preset value; or the angular velocity data reaches the preset angular velocity value; or the accumulated value of at least one angular velocity data value within the preset time reaches the preset value, etc.
  • the air pressure data may be collected by a barometer.
  • The air pressure data includes multiple air pressure data values and the collection time point corresponding to each air pressure data value. When the target selection condition is that the air pressure data value reaches the preset air pressure value, optionally, S1022 may include:
  • S1022-1 Locate a target air pressure data value that reaches a preset air pressure value from the air pressure data.
  • the preset air pressure value can be the air pressure value at which the sports equipment runs to the highest point or the lowest point, or the air pressure value when the sports equipment just starts to run.
  • S1022-2 Obtain a target time point corresponding to the target air pressure data value.
  • the air pressure data may be collected by a barometer.
  • the air pressure data includes multiple air pressure data values and the collection time points corresponding to each air pressure data value.
  • When the target selection condition is that the air pressure change rate of two adjacent air pressure data values reaches the preset air pressure change rate, S1022 may include:
  • S1022-3 Determine two adjacent air pressure data values whose air pressure change rate reaches a preset air pressure change rate from the multiple air pressure data values.
  • the air pressure change rate may be the increase value of the next air pressure data value compared to the previous air pressure data value in two adjacent air pressure data values.
  • the barometer collects data every 0.2s, so the collection time point at which the barometric pressure value changes the most can be used as the target time point, which usually corresponds to the lower position in the middle of the downslope.
  • If the sports equipment is a roller coaster, each row of shooting devices 20 arranged on the roller coaster can determine the target time point in this way.
  • Taking the collection time point with the maximum change as the target time point can effectively avoid an abnormal target time point in the situation where the roller coaster stops due to a failure.
  • The air pressure data values detected by the barometer can be used to determine the target air pressure data value. For example, if the barometer detects that the air pressure has decreased by 0.18 hPa within 10 seconds (equivalent to the roller coaster rising about 2 m), the decreased air pressure data value is used as the target air pressure data value.
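A sketch of the adjacent-change-rate search described above, assuming pressure readings arrive as (time, hPa) pairs sampled at a fixed interval (0.2 s in the example). The function name and the use of the change's magnitude are illustrative assumptions.

```python
def max_pressure_change_time(pressure_samples):
    """Among adjacent air pressure readings, return the collection time
    of the later reading in the pair whose change is largest in
    magnitude (e.g. the steepest part of a roller coaster's dive),
    or None if fewer than two samples are given.

    pressure_samples: list of (timestamp_seconds, pressure_hpa),
    in collection order.
    """
    best_time, best_change = None, 0.0
    for (t0, p0), (t1, p1) in zip(pressure_samples, pressure_samples[1:]):
        change = abs(p1 - p0)
        if change > best_change:
            best_time, best_change = t1, change
    return best_time
```

Replacing `> best_change` with a fixed preset rate gives the "reaches the preset air pressure change rate" variant.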
  • When the motion data includes position data, the position data may be collected by a gyroscope.
  • the position data includes multiple position data values and the collection time points corresponding to each position data value.
  • the target selection condition is: the position data value corresponds to the preset position.
  • S1022 may include:
  • S1022-6 Locate the target position data value corresponding to the preset position from the position data.
  • the preset location can be a location with beautiful scenery or a wide field of view that is convenient for taking pictures, or it can be a location when the sports equipment just starts running.
  • When the motion data includes acceleration data, the acceleration data may be collected by an acceleration sensor, and the acceleration data includes multiple acceleration data values and the collection time point corresponding to each acceleration data value.
  • the target selection condition is: the acceleration data value reaches the preset acceleration value.
  • S1022 may include:
  • S1022-8 Locate a target acceleration data value that reaches a preset acceleration value from the acceleration data.
  • the preset acceleration value may be the acceleration value when the sports equipment suddenly accelerates or decelerates, or it may be the acceleration value when the sports equipment just starts to run.
  • When the motion data includes acceleration data, the acceleration data may be collected by an acceleration sensor.
  • the acceleration data includes multiple acceleration data values and the collection time point corresponding to each acceleration data value.
  • the target selection condition is: the difference between each of a consecutive preset number of acceleration data values and the acceleration of gravity is greater than the preset value.
  • S1022 may include:
  • S1022-10 From the multiple acceleration data values, determine a consecutive preset number of acceleration data values whose difference from the gravitational acceleration is greater than the preset value.
  • For example, if the difference between the acceleration data value detected by the acceleration sensor and the gravitational acceleration g exceeds 1 m/s² for 6 consecutive readings, the acceleration data value of the sixth reading is regarded as the target acceleration data value.
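The consecutive-deviation condition above can be sketched as a running counter that resets whenever a reading falls back within the threshold. The constants (g = 9.8 m/s², threshold 1.0, count 6) and the (time, value) data layout are assumptions taken from the example.

```python
G = 9.8  # gravitational acceleration in m/s^2 (illustrative constant)


def consecutive_deviation_time(accel_samples, threshold=1.0, count=6):
    """Return the collection time of the reading that completes `count`
    consecutive readings whose deviation from gravity exceeds
    `threshold` (the 6th reading in the example above), or None.

    accel_samples: list of (timestamp_seconds, acceleration_m_s2).
    """
    run = 0
    for t, a in accel_samples:
        if abs(a - G) > threshold:
            run += 1
            if run == count:
                return t
        else:
            run = 0  # streak broken: start counting again
    return None
```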
  • the angular velocity data may be collected by a gyroscope.
  • the angular velocity data includes multiple angular velocity data values and the collection time point corresponding to each angular velocity data value.
  • the target selection condition is: the angular velocity data reaches the preset angular velocity value. Optionally, in this case, S1022 may include:
  • the preset angular velocity value can be the angular velocity value when the sports equipment runs to the position farthest from or nearest to the rotating shaft, or it can be the angular velocity value when the sports equipment just starts to operate.
  • the angular velocity data may be collected by a gyroscope.
  • the angular velocity data includes multiple angular velocity data values and the collection time point corresponding to each angular velocity data value.
  • the target selection condition is: the accumulated value of at least one angular velocity data value within the preset time reaches the preset value.
  • S1022 may include:
  • S1022-15 Determine, from the multiple angular velocity data values, at least one angular velocity data value at which the accumulated value within the preset time reaches the preset value.
  • the last of the at least one angular velocity data value is used as the target angular velocity data value.
  • The change in the angular velocity data values detected by the gyroscope can be used to determine the target angular velocity data value. For example, if the accumulated value of at least one angular velocity data value of the carousel detected by the gyroscope within 20 s reaches 1.2 rad/s, the last angular velocity data value within those 20 s is taken as the target angular velocity data value.
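One possible reading of the accumulated-angular-velocity condition above, assuming "accumulated value" means a plain sum of the readings inside a window of the preset time. All names, the window semantics, and the defaults (20 s, 1.2 rad/s from the example) are assumptions.

```python
def accumulated_angular_velocity_time(gyro_samples, window_s=20.0,
                                      preset=1.2):
    """Return the collection time of the angular velocity reading at
    which the sum of readings inside a window of `window_s` seconds
    first reaches `preset`, or None.

    gyro_samples: list of (timestamp_seconds, angular_velocity_rad_s),
    in collection order.
    """
    for i, (t_start, _) in enumerate(gyro_samples):
        total = 0.0
        for t, w in gyro_samples[i:]:
            if t - t_start > window_s:
                break  # reading falls outside the preset time window
            total += w
            if total >= preset:
                return t  # last reading needed to reach the preset value
    return None
```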
  • S1023 Based on the target time point, select a video clip with a preset duration before and/or after the target time point from the original video, to obtain at least one video clip.
  • After the target data value that meets the preset condition is located from the motion data and the target time point (for example, 12:01) corresponding to the target data value is obtained, since the collection time of the motion data corresponds to the shooting time of the original video, a video clip of a preset duration (for example, 10 s) before and/or after the target time point (for example, 12:01) can be selected from the original video according to that target time point.
  • The preset duration can usually be set to 1 to 5 s, and the cumulative duration of the at least one video segment can be 10-20 s; both can be flexibly set according to the actual situation and are not limited here.
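Step S1023 can be sketched as mapping the target time point to a frame-index range of the original video, assuming the motion-data clock and the video clock coincide (as stated above) and the frame rate is known. The function name and parameters are illustrative.

```python
def clip_frame_range(target_time_s, video_start_s, fps,
                     before_s=0.0, after_s=10.0):
    """Map a target time point to a frame-index range in the original
    video, selecting `before_s` seconds before and `after_s` seconds
    after the target time point.

    Returns (first_frame, last_frame); the start is clamped so the
    clip never begins before the video itself.
    """
    start_s = max(target_time_s - before_s, video_start_s)
    end_s = target_time_s + after_s
    first = int((start_s - video_start_s) * fps)
    last = int((end_s - video_start_s) * fps)
    return first, last
```

In practice the cut could be performed by any frame-accurate trimming tool once the range is known.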
  • S103 Insert at least one video segment into a preset video template to obtain a composite video, where the video template includes at least one template segment, and the composite video includes at least one video segment and at least one template segment.
  • The video template includes M blanks. The number of the M blanks is consistent with the number of the selected at least one video clip, and the cumulative duration of the M blanks is consistent with the cumulative duration of the selected at least one video clip, so that the at least one video clip can be inserted into the M blanks to obtain a composite video.
  • the template fragments may be, but are not limited to, template videos, special effects pictures, subtitles, etc.
  • the template videos may be aerial videos of amusement parks or amusement equipment, for example, scenery videos of amusement parks, aerial videos of roller coaster tracks, and so on.
  • The M blanks in the video template can be set according to the rhythm of the background music. For example, referring to Figure 4, at least one transition point is determined according to the rhythm of the background music, and the blank positions in the video template are set according to the transition points, with one transition point corresponding to one blank.
  • the composite video includes at least one video segment and at least one template segment.
  • The composite video may also include an opening credit and an ending credit, and the at least one video segment and at least one template segment are arranged between the opening credit and the ending credit.
  • The at least one video segment and the at least one template segment in the composite video can be set at intervals, where the interval setting can be one-to-one; for example, in Figure 4, video segment 1, video segment 2, video segment 3, template segment 1 and template segment 2 are set at one-to-one intervals. It can also be one-to-many, many-to-one or many-to-many; for example, referring to Figure 5, video segment 1, video segment 2 and template segment 1 are set at one-to-two intervals.
  • the specific way of setting the interval can be flexibly set according to the actual situation, which is not limited here.
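The blank-filling step of S103 can be sketched as interleaving the selected clips into the template's blank positions. Modeling the template as a list in which `None` marks a blank is an assumption for illustration; the application leaves the representation open.

```python
def fill_template(template_segments, clips):
    """Insert clips into the blanks of a video template.

    template_segments: a list in which None marks a blank and any
    other entry is a template fragment (template video, special
    effects picture, subtitle, ...). The number of blanks must match
    the number of clips, as required above.
    """
    if template_segments.count(None) != len(clips):
        raise ValueError("blank count must match clip count")
    clip_iter = iter(clips)
    return [next(clip_iter) if seg is None else seg
            for seg in template_segments]
```

With the Figure 4 layout this yields the one-to-one interleaving; any other arrangement of blanks gives the one-to-many or many-to-one variants.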
  • Fig. 6 is another flow diagram of the video editing method provided by this application. Please refer to Fig. 6. Before S102, the video editing method further includes:
  • S111 Perform face recognition on the original video, obtain the corresponding relationship between the face and the original video, and store it in the face database.
  • the server 10 may perform face recognition on the original video, for example, using a pre-trained face recognition model for face recognition to obtain the correspondence between the face and the original video .
  • the face can be represented by a multi-dimensional vector, for example, a 128-dimensional vector.
  • For each piece of original video, the server 10 can perform face detection and select the frame with the largest face proportion as the reference frame; face recognition is then performed on the reference frame, and other frames are matched against the reference frame. Frames that do not match are discarded.
  • When a new original video frame sent by the photographing device 20 is subsequently received, it can also be matched against the reference frame; the above steps are repeated to establish a binding relationship between the original video and the face, that is, one face corresponds to one original video.
  • The reference frame is not static: if a video frame corresponds to the same face as the reference frame and the face proportion of that video frame is greater than the face proportion of the reference frame, the video frame is used as the new reference frame.
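The reference-frame maintenance described above can be sketched as follows, assuming each frame carries a face identity and the proportion of the image occupied by the face. Both the frame representation (a dict) and the `same_face` predicate are assumptions for this example.

```python
def update_reference(reference, frame, same_face):
    """Decide whether an incoming frame is kept and whether it becomes
    the new reference frame.

    reference/frame: dicts with 'face_id' and 'face_ratio' keys
    (illustrative representation).
    same_face: predicate deciding whether two frames show the same face.
    Returns (new_reference, kept).
    """
    if reference is None:
        return frame, True            # first frame becomes the reference
    if not same_face(reference, frame):
        return reference, False       # no match: frame is discarded
    if frame["face_ratio"] > reference["face_ratio"]:
        return frame, True            # larger face: promote to reference
    return reference, True            # match, but reference stays
```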
  • For each original video, the server 10 sequentially performs video editing, that is, executes S102 to S103 to obtain the composite video corresponding to each original video, that is, the composite video corresponding to each face, and the correspondence between the original video and the composite video is also stored in the face database. That is, the face database includes the correspondence between the face and the original video, and the correspondence between the original video and the composite video.
  • the video editing method further includes:
  • S112 Acquire a video acquisition request sent by the mobile terminal, where the video acquisition request includes a face image.
  • the applet or application program will first prompt the user to upload a face image, and the user can take a selfie through the mobile terminal 30 and upload the face image to the server 10 through the applet or application program.
  • S113 Perform face recognition on the face image to obtain a target face corresponding to the face image.
  • After the server 10 receives the video acquisition request sent by the user through the mobile terminal 30, it can perform a face search based on the face image in the video acquisition request and determine the target composite video corresponding to the face in the face image.
  • The server 10 may perform face recognition on the face image, for example, using a pre-trained face recognition model, to obtain the target face corresponding to the face image, where the target face can be represented by a multi-dimensional vector, for example, a 128-dimensional vector.
  • After the server 10 performs face recognition on the face image sent by the mobile terminal 30 and obtains the target face corresponding to the face image, it may determine the target composite video corresponding to the target face based on the target face, the correspondence between faces and original videos, and the correspondence between original videos and composite videos stored in advance in the face database.
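The two correspondences stored in the face database can be chained with a simple double lookup. The dictionaries and key names below are illustrative stand-ins for the database, not part of the application.

```python
# Illustrative stand-ins for the two correspondence tables described above.
face_to_original = {"face_A": "orig_1", "face_B": "orig_2"}
original_to_composite = {"orig_1": "comp_1", "orig_2": "comp_2"}


def target_composite_video(target_face):
    """Chain the two stored correspondences to find the composite video
    for a recognized face; returns None when the face is unknown."""
    original = face_to_original.get(target_face)
    if original is None:
        return None
    return original_to_composite.get(original)
```

The S121-S125 variant stops at the first lookup (face to original video) and runs the editing steps on demand instead.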
  • S115 Send the target composite video to the mobile terminal, so that the mobile terminal displays the target composite video.
  • The server 10 can send the target composite video to the mobile terminal 30, for example, to an applet running under a third-party application installed in the mobile terminal 30 or to an application installed in the mobile terminal 30, so that users can watch or download their own target composite video.
  • FIG. 7 is a schematic diagram of another flow chart of the video editing method provided by this application. Please refer to FIG. 7.
  • the video editing method further includes:
  • S121 Perform face recognition on the original video to obtain the correspondence between the face and the original video and store it in the face database, that is, the face database includes the correspondence between the face and the original video.
  • S122 Acquire a video acquisition request sent by the mobile terminal, where the video acquisition request includes a face image.
  • S123 Perform face recognition on the face image to obtain a target face vector corresponding to the face image.
  • the video editing method further includes:
  • S125 Send the composite video corresponding to the target original video to the mobile terminal, so that the mobile terminal displays the composite video corresponding to the target original video.
  • this application can automatically select highlights in the original video to generate a synthesized video based on the motion data of the sports equipment, thereby eliminating the need for manual participation and improving the efficiency of video editing;
  • the shooting device 20 preprocesses the actually captured video content to obtain the original video, thereby shortening the video duration, reducing the amount of data transmission, and lowering the requirements for bandwidth and storage space;
  • Although the original video content shot by the shooting device 20 is monotonous on its own and not visually appealing, the server 10 selects highlights from the original video according to the motion data and inserts them into the preset video template to obtain a composite video enriched with background music, special effects pictures, scenery pictures, etc., making the video content rich and vivid.
  • FIG. 8 shows a block diagram of functional modules of a video editing apparatus 100 provided by an embodiment of the present application.
  • the video editing device 100 is applied to the server 10, and the video editing device 100 includes: a receiving module 101, a selecting module 102 and a processing module 103.
  • the receiving module 101 is configured to receive the original video sent by the shooting device and the motion data of the sports device, where the motion data is collected by the sports data collection device.
  • The original video is obtained by the shooting device 20 preprocessing the captured video content; the step of preprocessing the captured video content by the shooting device 20 includes: the shooting device performs face detection on the video content to obtain the face area of each video frame in the video content; the shooting device calculates the proportion of the face area in each video frame, and deletes the video frames whose proportion is less than the preset ratio to obtain the original video.
  • The original video is obtained by the shooting device 20 preprocessing the captured video content; the step of preprocessing the captured video content by the shooting device 20 includes: the shooting device performs face detection on the video content to obtain the face area of each video frame in the video content; the shooting device obtains the face pixel size corresponding to the face area of each video frame, and deletes the video frames whose face pixel size is less than the preset minimum face pixel size to obtain the original video.
  • the step of preprocessing the captured video content by the shooting device 20 further includes: the shooting device deletes overexposed or blurred video frames in the video content.
  • the selecting module 102 is configured to select at least one video segment from the original video according to the motion data.
  • the motion data includes a plurality of data values and a collection time point corresponding to each data value; the selection module 102 is configured to: obtain the target selection condition corresponding to the motion data; locate the target data value from the motion data according to the target selection condition and obtain the target time point corresponding to the target data value; and, based on the target time point, select a video clip with a preset duration before and/or after the target time point from the original video to obtain at least one video clip.
  • the server 10 pre-stores multiple device identifications and the selection condition corresponding to each device identification; the way in which the selection module 102 acquires the target selection condition corresponding to the motion data includes: analyzing the motion data to determine the target device identification corresponding to the motion data; and determining, according to the target device identification, the target selection condition corresponding to the motion data from the multiple selection conditions.
  • the exercise data includes air pressure data
  • the air pressure data includes a plurality of air pressure data values and a collection time point corresponding to each air pressure data value
  • the target selection condition includes that the air pressure data value reaches a preset air pressure value
  • the way in which the selection module 102 locates the target data value from the motion data according to the target selection condition and obtains the target time point corresponding to the target data value includes: locating the target air pressure data value that reaches the preset air pressure value from the air pressure data; and obtaining the target time point corresponding to the target air pressure data value.
  • the exercise data includes air pressure data
  • the air pressure data includes a plurality of air pressure data values and the collection time point corresponding to each air pressure data value
  • the target selection condition includes that the air pressure change rate of two adjacent air pressure data values reaches a preset air pressure change rate
  • the way in which the selection module 102 locates the target data value from the motion data according to the target selection condition and obtains the target time point corresponding to the target data value includes: determining, from the plurality of air pressure data values, the two adjacent air pressure data values whose air pressure change rate reaches the preset air pressure change rate; using the latter of the two adjacent air pressure data values as the target air pressure data value; and obtaining the target time point corresponding to the target air pressure data value.
  • the motion data includes position data
  • the position data includes a plurality of position data values and a collection time point corresponding to each position data value
  • the target selection condition includes a position data value corresponding to a preset position
  • the way in which the selection module 102 locates the target data value from the motion data according to the target selection condition and obtains the target time point corresponding to the target data value includes: locating the target position data value corresponding to the preset position from the position data; and obtaining the target time point corresponding to the target position data value.
  • the motion data includes acceleration data
  • the acceleration data includes a plurality of acceleration data values and a collection time point corresponding to each acceleration data value
  • the target selection condition includes that the acceleration data value reaches a preset acceleration value
  • the selection module 102 executes the method of locating the target data value from the motion data according to the target selection condition, and obtaining the target time point corresponding to the target data value, including: locating the target acceleration data value that reaches the preset acceleration value from the acceleration data ; Obtain the target time point corresponding to the target acceleration data value.
  • the motion data includes acceleration data
  • the acceleration data includes a plurality of acceleration data values and the collection time point corresponding to each acceleration data value
  • the target selection condition includes that the difference between each of a consecutive preset number of acceleration data values and the acceleration of gravity is greater than the preset value
  • the way in which the selection module 102 locates the target data value from the motion data according to the target selection condition and obtains the target time point corresponding to the target data value includes: determining, from the plurality of acceleration data values, a consecutive preset number of acceleration data values whose difference from the gravitational acceleration is greater than the preset value; using the last of the preset number of acceleration data values as the target acceleration data value; and obtaining the target time point corresponding to the target acceleration data value.
  • the motion data includes angular velocity data
  • the angular velocity data includes multiple angular velocity data values and a collection time point corresponding to each angular velocity data value
  • the target selection condition includes that the angular velocity data reaches a preset angular velocity value
  • the way in which the selection module 102 locates the target data value from the motion data according to the target selection condition and obtains the target time point corresponding to the target data value includes: locating the target angular velocity data value that reaches the preset angular velocity value from the angular velocity data; and obtaining the target time point corresponding to the target angular velocity data value.
  • the motion data includes angular velocity data
  • the angular velocity data includes a plurality of angular velocity data values and the collection time point corresponding to each angular velocity data value
  • the target selection condition includes that the accumulated value of at least one angular velocity data value within a preset time reaches a preset value
  • the way in which the selection module 102 locates the target data value from the motion data according to the target selection condition and obtains the target time point corresponding to the target data value includes: determining, from the plurality of angular velocity data values, at least one angular velocity data value whose accumulated value within the preset time reaches the preset value; using the last of the at least one angular velocity data value as the target angular velocity data value; and obtaining the target time point corresponding to the target angular velocity data value.
  • the processing module 103 is configured to insert at least one video segment into a preset video template to obtain a composite video, the video template includes at least one template segment, and the composite video includes at least one video segment and at least one template segment.
  • the video template further includes an opening credit and an ending credit, and the at least one template segment is set between the opening credit and the ending credit;
  • the composite video further includes an opening credit and an ending credit, and the at least one video segment and the at least one template segment are located between the opening credit and the ending credit.
  • the template segment includes at least one of a template video, a special effect picture, and a subtitle.
  • the server 10 establishes a face database in advance, and the processing module 103 is further configured to perform face recognition on the original video to obtain the corresponding relationship between the face and the original video and store it in the face database.
  • the face database includes the correspondence between the face and the original video, and the correspondence between the original video and the synthesized video;
  • the processing module 103 is also configured to: acquire a video acquisition request sent by the mobile terminal, where the video acquisition request includes a face image; perform face recognition on the face image to obtain a target face corresponding to the face image; based on the face database, Determine the target composite video corresponding to the target face; send the target composite video to the mobile terminal so that the mobile terminal displays the target composite video.
  • the server establishes a face database in advance
  • the processing module 103 is further configured to: perform face recognition on the original video to obtain the correspondence between the face and the original video and store it in the face database; acquire the video acquisition request sent by the mobile terminal, where the video acquisition request includes a face image; perform face recognition on the face image to obtain the target face corresponding to the face image; based on the face database, determine the target original video corresponding to the target face, use the target original video as the original video, and perform the step of selecting at least one video segment from the original video according to the motion data; and send the composite video corresponding to the target original video to the mobile terminal, so that the mobile terminal displays the composite video corresponding to the target original video.
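The face database described above stores two correspondences: face to original video, and original video to composite video. A minimal in-memory stand-in can sketch the lookup path from a recognized face to its composite videos; the class and method names here are assumptions, and real face recognition (matching a face image to a face ID) is outside this sketch.

```python
from typing import Dict, List, Set

class FaceDatabase:
    """In-memory stand-in for the face database: maps face IDs to original
    videos, and original videos to their composite videos."""

    def __init__(self) -> None:
        self.face_to_originals: Dict[str, Set[str]] = {}
        self.original_to_composite: Dict[str, str] = {}

    def register(self, face_id: str, original_video: str) -> None:
        """Store the correspondence between a face and an original video."""
        self.face_to_originals.setdefault(face_id, set()).add(original_video)

    def link_composite(self, original_video: str, composite_video: str) -> None:
        """Store the correspondence between an original and a composite video."""
        self.original_to_composite[original_video] = composite_video

    def composites_for_face(self, face_id: str) -> List[str]:
        """Resolve a target face to its composite videos via the two mappings."""
        originals = self.face_to_originals.get(face_id, set())
        return sorted(self.original_to_composite[o]
                      for o in originals
                      if o in self.original_to_composite)
```

On a video acquisition request, the server would first recognize the target face in the submitted face image, then call `composites_for_face` to find the composite videos to return to the mobile terminal.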
  • FIG. 9 shows a block diagram of functional modules of the server 10 provided by an embodiment of the present application.
  • the server 10 includes a processor 11, a memory 12 and a bus 13, and the processor 11 is connected to the memory 12 through the bus 13.
  • the memory 12 is configured to store programs, such as the video editing device 100 shown in FIG. 8.
  • the video editing device 100 includes at least one software function module that can be stored in the memory 12 in the form of software or firmware, or solidified in the operating system (OS) of the server 10.
  • the processor 11 executes the program to implement the video editing method disclosed in the foregoing embodiment.
  • the memory 12 may include a high-speed random access memory (Random Access Memory, RAM), and may also include a non-volatile memory (NVM).
  • the processor 11 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method can be completed by an integrated logic circuit of hardware in the processor 11 or instructions in the form of software.
  • the aforementioned processor 11 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a microcontroller unit (Microcontroller Unit, MCU), a complex programmable logic device (Complex Programmable Logic Device, CPLD), a field-programmable gate array (Field-Programmable Gate Array, FPGA), an embedded ARM chip, and the like.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and the computer program is executed by the processor 11 to implement the video editing method disclosed in the above-mentioned embodiment.
  • the present application provides a video editing method, device, server, and computer-readable storage medium.
  • the method includes: receiving the original video sent by the shooting device and the motion data of the sports device, wherein the motion data is collected by a motion data collection device; selecting at least one video segment from the original video according to the motion data; and inserting the at least one video segment into a preset video template to obtain a composite video, where the video template includes at least one template segment, and the composite video includes at least one video segment and at least one template segment.
  • the present application can automatically select highlights in the original video according to the motion data of the sports device to generate a composite video, thereby eliminating the need for manual participation and improving the efficiency of video editing.
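The overall pipeline summarized above (locate target time points from motion data, clip segments around them, insert them into a template) can be sketched end to end. This is a simplified illustration under assumptions: motion samples are `(time, angular_velocity)` pairs, each highlight segment is a fixed-width interval around a target time point, and segments are represented as labels rather than decoded video.

```python
from typing import List, Tuple

def edit_video(motion_samples: List[Tuple[float, float]],
               threshold: float,
               half_width: float,
               template_segments: List[str],
               opening: str = "opening_credits",
               ending: str = "ending_credits") -> list:
    """Sketch of the editing pipeline: select highlight intervals from motion
    data, then interleave them with template segments between the credits."""
    # 1. Locate target time points where the angular velocity reaches the threshold.
    targets = [t for t, w in motion_samples if w >= threshold]
    # 2. Select one video segment (start, end) around each target time point.
    segments = [(max(0.0, t - half_width), t + half_width) for t in targets]
    # 3. Assemble the composite timeline between opening and ending credits.
    timeline: list = [opening]
    for i, seg in enumerate(segments):
        timeline.append(("video", seg))
        if i < len(template_segments):
            timeline.append(("template", template_segments[i]))
    timeline.append(ending)
    return timeline
```

For example, a single sample exceeding the threshold at t = 5.0 with a half-width of 1.0 yields one highlight interval (4.0, 6.0) placed between the credits, followed by the first template segment.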
  • each block in the flowchart or block diagram may represent a module, program segment, or part of the code, and the module, program segment, or part of the code contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may also occur in an order different from the order marked in the drawings.
  • each block in the block diagram and/or flowchart, and each combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the functional modules in the various embodiments of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
  • if the function is implemented in the form of a software function module and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, and other media that can store program code.
  • relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations.
  • the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to the process, method, article, or device. Without further restrictions, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present application relates to the technical field of video processing, and discloses a video editing method, apparatus, and server, and a computer-readable storage medium. The method comprises: receiving an original video sent by a capture device and motion data of a mobile device, the motion data being collected by a motion data collection apparatus; selecting, according to the motion data, at least one video segment from the original video; and inserting the at least one video segment into a preset video template to obtain a composite video, the video template comprising at least one template segment and the composite video comprising the at least one video segment and the at least one template segment. In the present application, a highlight in the original video is automatically selected according to the motion data of the mobile device to generate a composite video, so no manual participation is needed and the efficiency of video editing is improved.
PCT/CN2020/132585 2019-12-05 2020-11-30 Procédé, appareil et serveur d'édition de vidéo et support de stockage lisible par ordinateur WO2021109952A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911231580.4 2019-12-05
CN201911231580.4A CN110996112A (zh) 2019-12-05 2019-12-05 视频编辑方法、装置、服务器及存储介质

Publications (1)

Publication Number Publication Date
WO2021109952A1 true WO2021109952A1 (fr) 2021-06-10

Family

ID=70090256

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132585 WO2021109952A1 (fr) 2019-12-05 2020-11-30 Procédé, appareil et serveur d'édition de vidéo et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN110996112A (fr)
WO (1) WO2021109952A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024131648A1 (fr) * 2022-12-21 2024-06-27 北京字跳网络技术有限公司 Procédé de découpage vidéo, appareil, dispositif électronique, et support de stockage lisible

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110996112A (zh) * 2019-12-05 2020-04-10 成都市喜爱科技有限公司 视频编辑方法、装置、服务器及存储介质
CN111654619A (zh) * 2020-05-18 2020-09-11 成都市喜爱科技有限公司 智能拍摄方法、装置、服务器及存储介质
CN112040278A (zh) * 2020-09-16 2020-12-04 成都市喜爱科技有限公司 视频处理方法、装置、拍摄终端、服务器及存储介质
CN112203142A (zh) * 2020-12-03 2021-01-08 浙江岩华文化科技有限公司 视频的处理方法、装置、电子装置和存储介质
CN112702650A (zh) * 2021-01-27 2021-04-23 成都数字博览科技有限公司 一种献血推广方法和献血车
CN115119044B (zh) * 2021-03-18 2024-01-05 阿里巴巴新加坡控股有限公司 视频处理方法、设备、***及计算机存储介质
CN114500826B (zh) * 2021-12-09 2023-06-27 成都市喜爱科技有限公司 一种智能拍摄方法、装置及电子设备
CN114363712B (zh) * 2022-01-13 2024-03-19 深圳迪乐普智能科技有限公司 基于模板化编辑的ai数字人视频生成方法、装置及设备
CN115103206B (zh) * 2022-06-16 2024-02-13 北京字跳网络技术有限公司 视频数据的处理方法、装置、设备、***及存储介质
CN115278299B (zh) * 2022-07-27 2024-03-19 腾讯科技(深圳)有限公司 无监督的训练数据生成方法、装置、介质及设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050052532A1 (en) * 2003-09-08 2005-03-10 David Elooz System and method for filming and recording attractions
CN204425487U (zh) * 2015-03-17 2015-06-24 百度在线网络技术(北京)有限公司 运动摄像头装置、自行车和骑行***
CN107281709A (zh) * 2017-06-27 2017-10-24 深圳市酷浪云计算有限公司 一种运动视频片段的提取方法及装置、电子设备
CN108694737A (zh) * 2018-05-14 2018-10-23 星视麒(北京)科技有限公司 制作图像的方法和装置
CN108769560A (zh) * 2018-05-31 2018-11-06 广州富勤信息科技有限公司 一种高速环境下模式化数字影像的制作方法
CN110121105A (zh) * 2018-02-06 2019-08-13 上海全土豆文化传播有限公司 剪辑视频生成方法及装置
CN110418073A (zh) * 2019-07-22 2019-11-05 富咖科技(大连)有限公司 一种用于卡丁车运动的视频自动采集与合成方法
CN110996112A (zh) * 2019-12-05 2020-04-10 成都市喜爱科技有限公司 视频编辑方法、装置、服务器及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0535178A (ja) * 1991-07-26 1993-02-12 Pioneer Electron Corp 記録・再生装置

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050052532A1 (en) * 2003-09-08 2005-03-10 David Elooz System and method for filming and recording attractions
CN204425487U (zh) * 2015-03-17 2015-06-24 百度在线网络技术(北京)有限公司 运动摄像头装置、自行车和骑行***
CN107281709A (zh) * 2017-06-27 2017-10-24 深圳市酷浪云计算有限公司 一种运动视频片段的提取方法及装置、电子设备
CN110121105A (zh) * 2018-02-06 2019-08-13 上海全土豆文化传播有限公司 剪辑视频生成方法及装置
CN108694737A (zh) * 2018-05-14 2018-10-23 星视麒(北京)科技有限公司 制作图像的方法和装置
CN108769560A (zh) * 2018-05-31 2018-11-06 广州富勤信息科技有限公司 一种高速环境下模式化数字影像的制作方法
CN110418073A (zh) * 2019-07-22 2019-11-05 富咖科技(大连)有限公司 一种用于卡丁车运动的视频自动采集与合成方法
CN110996112A (zh) * 2019-12-05 2020-04-10 成都市喜爱科技有限公司 视频编辑方法、装置、服务器及存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024131648A1 (fr) * 2022-12-21 2024-06-27 北京字跳网络技术有限公司 Procédé de découpage vidéo, appareil, dispositif électronique, et support de stockage lisible

Also Published As

Publication number Publication date
CN110996112A (zh) 2020-04-10

Similar Documents

Publication Publication Date Title
WO2021109952A1 (fr) Procédé, appareil et serveur d'édition de vidéo et support de stockage lisible par ordinateur
CN109326310B (zh) 一种自动剪辑的方法、装置及电子设备
US11516557B2 (en) System and method for enhanced video image recognition using motion sensors
EP3488618B1 (fr) Services de diffusion en continu de vidéo en direct avec relecture de faits saillants en fonction d'un apprentissage par machine
US10157638B2 (en) Collage of interesting moments in a video
CN106162223B (zh) 一种新闻视频切分方法和装置
US20180227482A1 (en) Scene-aware selection of filters and effects for visual digital media content
US20160225410A1 (en) Action camera content management system
CN107436921B (zh) 视频数据处理方法、装置、设备及存储介质
US8897603B2 (en) Image processing apparatus that selects a plurality of video frames and creates an image based on a plurality of images extracted and selected from the frames
CN105262942B (zh) 分布式自动影像和视频处理
US10599145B2 (en) Systems and methods for determining preferences for flight control settings of an unmanned aerial vehicle
CN112312142B (zh) 视频播放控制方法、装置和计算机可读存储介质
CN104123396A (zh) 一种基于云电视的足球视频摘要生成方法及装置
WO2021031733A1 (fr) Procédé de génération d'effet spécial vidéo, et terminal
CN110612721A (zh) 视频处理方法及终端设备
US10958837B2 (en) Systems and methods for determining preferences for capture settings of an image capturing device
CN104660948A (zh) 一种视频录制方法和装置
CN105872601A (zh) 视频播放方法、装置及***
CN111241872A (zh) 视频图像遮挡方法及装置
CN105872537A (zh) 视频播放方法、装置及***
CN108540817B (zh) 视频数据处理方法、装置、服务器及计算机可读存储介质
CN112287771A (zh) 用于检测视频事件的方法、装置、服务器和介质
US10395119B1 (en) Systems and methods for determining activities performed during video capture
KR20160025474A (ko) 판정 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896610

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20896610

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09-03-2023)
