WO2021237708A1 - Cutting method, allocation method, medium, server, *** - Google Patents
Cutting method, allocation method, medium, server, ***
- Publication number
- WO2021237708A1 (PCT/CN2020/093395)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: sub, video, image, frame, images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
- G06F3/1446—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0435—Change or adaptation of the frame rate of the video stream
Definitions
- the present disclosure relates to the field of display technology, in particular to a video processing method, a cutting task distribution method, a computer-readable storage medium, an execution server, a scheduling server, and a video processing system.
- In some scenarios where content needs to be displayed on a large screen, a splicing screen is used.
- the splicing screen includes multiple display terminals; to present a single picture, different display terminals in the same splicing screen need to display different parts of that picture.
- the purpose of the present disclosure is to provide a video processing method, a cutting task distribution method, a computer-readable storage medium and an execution server, a scheduling server, and a video processing system.
- a video processing method including:
- each of the sub-videos includes M frames of sub-images, and the frame durations of the multiple sub-videos are the same as each other, wherein the i-th frame sub-images of all the sub-videos are assembled into the i-th frame image of the initial video, where i is a natural number greater than or equal to 1.
- the step of dividing each frame of the initial video into multiple sub-images includes:
- the segmentation information includes the number of sub-images into which each frame of the image is segmented and the layout information of multiple sub-images into which one frame of image is segmented.
- the steps for segmenting each frame of image include:
- Each frame of the initial video is divided according to the size of each sub-image and the layout information of the multiple sub-images divided into one frame of image.
- the step of segmenting each frame image of the initial video according to the size of the sub-image and the layout information of each of the sub-images includes:
- the information of the pixels belonging to each sub-image is determined according to the coordinates of the reference point of each sub-image in the corresponding image and the size of the sub-image, so as to obtain each sub-image.
- each frame image of the initial video is a rectangular image
- the number of sub-images into which each frame of the image is divided is a × b
- each sub-image is a rectangular image
- the reference point of the sub-image is the vertex at the upper left corner of the sub-image.
- the video processing method further includes performing after forming multiple sub-videos using all the obtained sub-images:
- the video cutting method further includes performing before the step of dividing each frame of the initial video into multiple sub-images:
- the initial video has a target format
- the step of acquiring the initial video according to the task address includes:
- format conversion is performed on the source video to obtain the initial video.
- it further includes performing after the step of forming multiple sub-videos by using all the obtained sub-images:
- the task list is issued to the multiple display terminals of the splicing screen.
- the step of determining the playback task according to the identification information of each sub-video and multiple display terminals in the splicing screen includes:
- the play task is generated according to each of the sub-videos, the identification information of the display terminal used as the master, and the identification information of the display terminal used as the slave.
- a cutting task allocation method including:
- the cutting task is allocated to a server that meets a predetermined condition, so that the server that receives the cutting task executes the above-mentioned video processing method provided in the present disclosure.
- the predetermined condition is:
- the number of tasks being executed by the server does not exceed a predetermined number.
- the cutting task distribution method further includes, between the step of generating at least one cutting task according to the received source video and the step of allocating the cutting task to servers meeting predetermined conditions according to the status of each server:
- sorting the N servers in ascending order of the number of tasks each server is executing;
- the step of allocating the cutting task to servers meeting predetermined conditions according to the status of each server includes:
- the cutting task distribution method further includes:
- the mapping relationship between the cutting task and the server executing the cutting task is stored.
- a computer-readable storage medium is provided, the computer-readable storage medium is used to store an executable program, and when the executable program is invoked, the video processing method or the cutting task allocation method described above can be implemented.
- an execution server includes:
- a first storage module on which a first executable program is stored
- one or more first processors, where the one or more first processors call the first executable program to implement the video processing method
- a first I/O interface where the first I/O interface is connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
- a dispatch server includes:
- a second storage module on which a second executable program is stored
- one or more second processors, where the one or more second processors call the second executable program to implement the cutting task allocation method
- a second I/O interface where the second I/O interface is connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
- a video processing system includes the execution server and the scheduling server.
- the video processing system further includes a splicing screen, the splicing screen includes a plurality of display terminals, and the plurality of display terminals are used to display each of the sub-videos respectively.
- FIG. 1 is a schematic diagram of an implementation manner of a video processing method provided in the first aspect of the present disclosure
- FIG. 2a is a schematic diagram of the first frame of the initial video
- FIG. 2b is a schematic diagram of the second frame of the initial video
- Figure 3a shows a schematic diagram of the first frame image of the initial video being divided into four sub-images
- Figure 3b shows a schematic diagram of the second frame image of the initial video being divided into four sub-images
- FIG. 4 is a schematic flowchart of an embodiment of step S110
- Figure 5 is a schematic diagram of an array displayed on the user side
- FIG. 6 is a schematic flowchart of an implementation manner of step S112;
- FIG. 7 is a schematic flowchart of an embodiment of step S112;
- FIG. 8 is a schematic flowchart of another implementation manner of step S112;
- FIG. 9 is a schematic diagram of two different initial videos displayed on different display terminals in the splicing screen.
- FIG. 10 shows a schematic diagram of another implementation manner of the video processing method provided by the present disclosure.
- FIG. 11 shows a schematic flowchart of step S105
- FIG. 12 shows a schematic flowchart of step S105b
- FIG. 13 shows a schematic diagram of still another embodiment of the video processing method provided by the present disclosure.
- FIG. 14 is a schematic flowchart of an embodiment of step S150;
- FIG. 15 is a flowchart of an embodiment of the cutting task distribution method provided by the present disclosure.
- FIG. 16 is a flowchart of another embodiment of the cutting task distribution method provided by the present disclosure.
- FIG. 17 is a flowchart of still another embodiment of the cutting task distribution method provided by the present disclosure.
- FIG. 18 is a flowchart of a specific implementation of the cutting task distribution method provided by the present disclosure.
- FIG. 19 is a flowchart of the execution server provided by the present disclosure when performing a cutting task
- FIG. 20 is a schematic diagram of modules of the video processing system provided by the present disclosure.
- the video processing method includes:
- each frame of the initial video is divided into multiple sub-images, the initial video includes M frames of images, where M is a positive integer greater than 1;
- step S120 all the obtained sub-images are used to form a plurality of sub-videos, each of the sub-videos includes M frames of sub-images, and the frame durations of the plurality of sub-videos are the same as each other, wherein the i-th frame sub-images of all the sub-videos are assembled into the i-th frame image of the initial video.
- the relative position of each i-th frame sub-image in the i-th frame image of the initial video is the same as the relative position of that sub-video's other frame sub-images in the corresponding frame images of the initial video; i is a variable, i is a natural number, and i runs from 1 to M in sequence.
- i is 1 to M in turn means that i is 1, 2, 3, 4, 5...M respectively.
- Each frame of the initial video is a rectangular image, and each frame is divided into four sub-images with two rows and two columns.
- the sub-images obtained by the division may form 4 sub-videos.
- these 4 sub-videos are respectively referred to as the first sub-video, the second sub-video, the third sub-video, and the fourth sub-video.
- taking the first sub-video as an example, its first frame sub-image is the sub-image in the first row and first column of the four sub-images obtained by dividing the first frame of the initial video; its second frame sub-image is the sub-image in the first row and first column of the four sub-images obtained by dividing the second frame of the initial video; and so on.
- the multiple sub-videos obtained in step S120 of the present disclosure are delivered to the display terminals in the splicing screen; each display terminal displays one sub-video, and as a whole the splicing screen displays the initial video.
- the i-th frame sub-image of each sub-video is a part of the i-th frame image of the initial video, and the time axis of each sub-video is the same as the time axis of the initial video. Therefore, having each display terminal of the splicing screen display its sub-video is visually equivalent to playing the initial video.
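The split-and-reassemble relationship described above can be sketched in a few lines. This is a minimal illustration, not the disclosed implementation: frames are modeled as 2-D lists of pixel values, and the names `split_frame` and `split_video` are assumptions introduced here.

```python
def split_frame(frame, rows, cols):
    """Split one frame into rows * cols equal sub-images, row-major order."""
    h, w = len(frame), len(frame[0])
    th, tw = h // rows, w // cols          # tile height / width
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append([line[c * tw:(c + 1) * tw]
                          for line in frame[r * th:(r + 1) * th]])
    return tiles

def split_video(frames, rows, cols):
    """Turn an M-frame video into rows * cols sub-videos of M frames each.

    The i-th frame of sub-video k is tile k of the i-th frame of the
    initial video, so every sub-video shares the initial video's time axis.
    """
    sub_videos = [[] for _ in range(rows * cols)]
    for frame in frames:
        for k, tile in enumerate(split_frame(frame, rows, cols)):
            sub_videos[k].append(tile)
    return sub_videos
```

Displaying sub-video k on the terminal at grid position k then reconstructs each original frame, which is the visual equivalence noted above.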
- the "splicing screen” here refers to a display terminal group formed by splicing multiple display terminals, and multiple display terminals in the splicing screen can be used to display the same screen.
- the video processing method provided by the present disclosure will be explained below in conjunction with FIG. 2a, FIG. 2b, FIG. 3a, and FIG. 3b.
- the initial video includes M frames of images.
- Figure 2a shows a schematic diagram of the first frame of the initial video
- Figure 2b shows a schematic diagram of the second frame of the initial video.
- step S110 the first frame image of the initial video is divided into four sub-images as shown in FIG. 3a, the second frame image of the initial video is divided into four sub-images as shown in FIG. 3b, and so on, until each of the M frames of the initial video is divided into four sub-images.
- step S120 four sub-videos are formed using each sub-image obtained from each frame image.
- the time axis of the four sub-videos is the same, and all of them are the same as the time axis of the initial video.
- the duration of one frame of the initial video is T ms, and the duration of one frame of each sub-video is t ms.
- in some embodiments, t = T.
- the present disclosure is not limited to this, and the specific value of t can be set according to playback requirements, as long as the duration of one frame in each sub-video is the same. Therefore, when each sub-video is played at the same time, the frames of each sub-video can be synchronized, and the initial video can be displayed in a spliced screen manner.
- the number of sub-videos into which the initial video is divided is not specifically limited. As an optional implementation manner, it may be determined according to the number of display terminals in the splicing screen. For example, when the splicing screen includes four display terminals, the initial video is divided into four sub-videos. That is, each frame of the initial video is divided into four sub-images.
- the video processing method provided by the present disclosure can be executed by a server arranged in the cloud.
- a split request can be generated according to the actual situation of the splicing screen, and then the split request can be uploaded to the cloud server.
- the segmentation request may include segmentation information corresponding to the segmentation method of each frame of the initial video (for example, the number of sub-images into which each frame of image is segmented, the shape of each sub-image, the size of each sub-image, etc.) .
- step S110 may include:
- step S111 a segmentation request is received, where the segmentation request includes segmentation information for each frame of image;
- step S112 each frame of image of the initial video is segmented according to the segmentation request.
- the sender of the segmentation request in step S111 is not particularly limited.
- the split request may be sent (or referred to as upload) by the administrator of the splicing screen via the Internet to the server that executes the video processing method.
- the segmentation information of each frame of image in the segmentation request may include the number of sub-images into which each frame of image is divided and the layout information of multiple sub-images into which one frame of image is divided.
- step S112 may include:
- step S112a the size of each of the sub-images is determined according to the segmentation information
- each frame of the initial video is segmented according to the size of the sub-image and the layout information of each of the sub-images.
- the size of each frame of image of the initial video is known, and the size of each sub-image can be determined according to the number of sub-images corresponding to each frame of image.
- step S112b may include:
- step S112b1 the coordinates of the reference point of each sub-image in the corresponding image are determined according to the size of each sub-image and the layout information of the sub-images;
- step S112b2 the information of the pixels belonging to each sub-image in the corresponding image is determined according to the coordinates of the reference point of each sub-image and the size of the sub-image, so as to obtain each sub-image.
- the reference point may be the first point displayed when the sub-image is displayed.
- the reference point of the sub-image can be the top left corner of the sub-image.
- the information of the pixels belonging to each sub-image can be determined according to the size of the sub-image (the information may include the position information of the pixel in the image and the grayscale information of the pixel).
- the information of each sub-image can be output.
- the shape of each frame of sub-image of the sub-video divided into the initial video may be determined by the outline of each display terminal in the splicing screen.
- the shape of the splicing screen is a rectangle, and the splicing screen includes rectangular display terminals arranged in 2 rows and 4 columns, and the shape of the sub-image may also be a rectangle.
- the reference point of the sub-image is the vertex at the upper left corner of the sub-image.
- each frame image of the initial video is a rectangular image
- the number of sub-images into which each frame of the image is divided is a × b
- each sub-image is a rectangular image
- the reference point of the sub-image is the vertex at the upper left corner of the sub-image, where a and b are both positive integers.
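For the rectangular a × b case, the reference-point determination described above reduces to simple arithmetic. The sketch below is illustrative: `reference_points` is a hypothetical helper, and the frame is assumed to divide evenly into tiles.

```python
def reference_points(width, height, a, b):
    """For an a-row, b-column split of a width x height frame, return a
    row-major list of ((x, y), (tile_w, tile_h)) pairs, where (x, y) is the
    upper-left vertex of each sub-image (its reference point)."""
    tile_w, tile_h = width // b, height // a
    points = []
    for row in range(a):
        for col in range(b):
            points.append(((col * tile_w, row * tile_h), (tile_w, tile_h)))
    return points
```

Given a reference point and the tile size, the pixels belonging to a sub-image are exactly those in the rectangle `[x, x + tile_w) x [y, y + tile_h)`.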
- the user terminal may display the thumbnail of the split mode.
- an array diagram can be displayed on the user terminal, and the segmentation request can be generated by the operator selecting the number of rows and the number of columns.
- the terminal on the user side can display the array diagram, and the operator can select the number of rows and columns that need to be divided into each frame of image by selecting it with the mouse, and the layout of each sub-image can be clearly determined through the array diagram.
- the layout information mentioned here refers to the relative coordinate information of the multiple sub-images into which one image is divided; the relative coordinate information of each sub-image in the layout corresponds to the position coordinates of that sub-image in the corresponding image.
- step S112 includes:
- step S112e the relative coordinate information of each sub-image in the layout is converted into position coordinates of each sub-image in the corresponding image;
- step S112f the image is segmented according to the position coordinates of each of the sub-images in the corresponding image.
- for example, the user end provides a 2 × 4 segmentation request.
- in FIG. 9, A1 to A4 and B1 to B4 represent 8 display terminals of the same size; the unfilled part displays the first initial video, which is cut into 2 × 3 sub-videos, and the part filled with diagonal lines displays the second initial video, which is cut into 2 × 1 sub-videos.
- coordinates can be assigned to each minimum unit (i.e., each sub-video):
- the coordinate of the sub-video corresponding to display terminal A1 is (0, 0)
- the coordinate of the sub-video corresponding to display terminal B4 is (1, 3).
- the division mode and the initial video can be mapped one by one, and uploaded to the server that executes the video processing method.
- step S112d to step S112f are executed.
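The conversion of steps S112e/S112f — from a (row, column) layout coordinate in the array diagram to pixel coordinates in the frame — can be sketched as follows, under the assumption of uniform tile sizes as in the 2 × 4 example; the function name is illustrative.

```python
def layout_to_pixels(layout_coord, frame_w, frame_h, rows, cols):
    """Map a (row, col) layout coordinate to an (x, y, w, h) pixel
    rectangle in a frame_w x frame_h frame split into rows x cols tiles."""
    row, col = layout_coord
    w, h = frame_w // cols, frame_h // rows
    return (col * w, row * h, w, h)
```

For instance, in a 3840 × 1080 frame split 2 × 4, the unit at layout coordinate (1, 3) (terminal B4 in the example) occupies the 960 × 540 rectangle whose upper-left corner is (2880, 540).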
- videos can be divided for different splicing screens, and different display purposes can be achieved.
- the display purpose is to display video on a spliced screen in which multiple display terminals are arranged in 2 rows and 3 columns.
- the information carried in the segmentation request may include segmenting the initial video into 2 × 3 sub-videos, where each frame of the initial video is a rectangular image.
- the video cutting method further includes performing after step S120:
- step S130 an address is allocated to each of the sub-videos.
- the splicing screen can download the corresponding sub-video according to the address of each said sub-video.
- each sub-video may be downloaded to a local storage device first, and then each sub-video may be distributed to a corresponding display terminal.
- each sub-video can be directly downloaded to the corresponding display terminal.
- there is no particular limitation on how to determine the correspondence between each sub-video and each display terminal. For example, after downloading each sub-video to the local storage device, each sub-video may be previewed first, and the correspondence between each sub-video and each display terminal then determined according to the preview result.
- the video processing method may further include the following steps after step S120:
- step S140 a mapping relationship between each of the sub-videos and each display terminal that plays each sub-video is determined.
- step S130 may be executed first, and then step S140 may be executed, or step S140 may be executed first, and then step S130 may be executed, or step S130 and step S140 may be executed simultaneously.
- the initial video may be a video resource stored locally on the server that executes the video processing method.
- the segmentation request uploaded by the user includes identification information (for example, a video number) of the initial video to be segmented. After receiving the segmentation request, the initial video is first determined, and then step S110 is performed.
- the initial video may also be a video resource stored in another location.
- the video processing method may further include performing before step S110:
- step S100 the cutting task address is acquired
- step S105 the initial video is acquired according to the cutting task address.
- step S105 may include:
- step S105a the source video at the task address is acquired
- step S105b when the format of the source video is inconsistent with the target format, format conversion is performed on the source video to obtain the initial video.
- video formats include mp4, avi, wmv, rmvb, and other formats.
- for example, when the target format is mp4 and the format of the source video is not mp4, the source video can be transcoded into mp4 format.
- the initial video may be stored in the cutting task address.
- step S105b may further include:
- step S105b1 the source video is stored locally
- step S105b2 a transcoding task is generated
- step S105b3 the source video is transcoded using the FFMPEG program, and a video in mp4 format is output;
- step S105b4 the transcoding progress is output
- step S105b5 the address of the transcoded file is recorded in the database
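The FFMPEG transcoding step might be driven as below, assuming the `ffmpeg` CLI is on the path. This is a hedged sketch: the paths, the helper names, and the minimal flag set are assumptions, and progress reporting and database recording are omitted.

```python
import subprocess

def build_transcode_cmd(src_path, dst_path):
    """Build an ffmpeg command that transcodes the source video to mp4.

    -y overwrites an existing output file; -f mp4 forces the mp4 container.
    """
    return ["ffmpeg", "-y", "-i", src_path, "-f", "mp4", dst_path]

def transcode(src_path, dst_path):
    """Run the transcode, raising if ffmpeg exits with a non-zero status."""
    subprocess.run(build_transcode_cmd(src_path, dst_path), check=True)
```

For example, `transcode("source.avi", "initial.mp4")` would produce the mp4-format initial video from a non-mp4 source.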
- each sub-video obtained by cutting the initial video needs to be delivered to each display terminal of the splicing screen.
- the mapping relationship between the sub video and the display terminal can be established.
- the display terminal displays the corresponding sub video.
- the video processing method may further include performing after step S120:
- step S150 a play task is determined according to each sub-video
- step S160 a task list is generated according to the playback task
- step S170 the task list is issued to multiple display terminals of the splicing screen.
- after receiving the task list, the display terminal can display the sub-videos defined in the task list.
- step S150 may include:
- step S151 the identification information of the display terminal required by the playback task is determined
- step S152 the master in the play task and the slave in the play task are determined according to the identification information of the display terminal required by the play task;
- step S153 the play task is generated according to each of the sub-videos, the identification information of the display terminal used as the master, and the identification information of the display terminal used as the slave.
- the display terminal used as the master can control the display terminal used as the slave to display corresponding playback tasks.
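One plausible shape for the play task produced by steps S151 to S153 is sketched below. The field names (`master`, `slaves`, `assignments`) and the choice of the first terminal as master are assumptions for illustration, not taken from the disclosure.

```python
def build_play_task(sub_video_ids, terminal_ids):
    """Pair each sub-video with a display terminal; the first terminal in
    the list acts as master and the remaining terminals act as slaves."""
    if len(sub_video_ids) != len(terminal_ids):
        raise ValueError("one sub-video per display terminal is expected")
    return {
        "master": terminal_ids[0],
        "slaves": terminal_ids[1:],
        "assignments": dict(zip(terminal_ids, sub_video_ids)),
    }
```

A task list issued to the splicing screen would then be a collection of such records, with the master terminal coordinating playback on the slaves.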
- the cutting task distribution method includes:
- step S210 generate at least one cutting task according to the received source video
- step S220 according to the status of each server, the cutting task is allocated to a server that meets a predetermined condition, so that the server that receives the cutting task executes the above-mentioned video processing method provided in the present disclosure.
- multiple distributed servers set up in the cloud can all execute the video processing method provided in the first aspect of the present disclosure.
- the status of each server capable of executing the video processing method can be determined first (the status includes the number of tasks currently performed by the server).
- the predetermined conditions are not particularly limited.
- the predetermined condition is:
- the number of tasks being executed by the server does not exceed a predetermined number.
- the predetermined number can be determined according to the processing capacity of each server.
- the predetermined number may be two.
- the cutting task distribution method further includes performing between step S210 and step S220:
- step S215 the N servers are sorted in ascending order of the number of tasks each server is executing.
- the predetermined condition includes: among the N servers, ranking in the top L positions, where L and N are both positive integers, and L ⁇ N.
- L may be less than N/2.
- the cutting task distribution method further includes:
- step S230 the mapping relationship between the cutting task and the server executing the cutting task is stored.
- step S210 a cutting task is generated according to the received source video
- Step S215 is specifically executed as: obtaining the configuration information, the IP address, and the number of tasks in progress of each server that can perform cutting tasks, and sorting the servers accordingly;
- Step S220 is specifically executed as: preferentially allocating the 2 × 3 cutting task to the servers executing the fewest tasks;
- Step S230 is specifically executed as: storing the data of the cutting task (in the present disclosure, the task of dividing the initial video into 6 sub-videos can be saved as one task or as multiple tasks) and the IP address of the server executing the task in the database in the form of a data task table.
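Steps S215 and S220 amount to a least-loaded scheduling policy, which can be sketched as follows. The server records (`ip`, `task_count`) and the helper name are illustrative; the real dispatch server would also persist the task-to-server mapping of step S230.

```python
def allocate_cutting_task(servers, top_l=1):
    """Sort servers in ascending order of running tasks, keep the top L
    candidates, and return the IP of the least-loaded one."""
    ranked = sorted(servers, key=lambda s: s["task_count"])
    candidates = ranked[:top_l]
    return candidates[0]["ip"]
```

With L well below N (e.g. L < N/2, as suggested above), the cutting task always lands on one of the least-busy servers in the pool.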
- a computer-readable storage medium is provided, the computer-readable storage medium is used to store an executable program, and when the executable program is invoked, the video processing method or the cutting task allocation method described above can be implemented.
- Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
- the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
- Computer storage media include but are not limited to RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or Any other medium used to store desired information and that can be accessed by a computer.
- a communication medium usually contains computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.
- an execution server includes:
- a first storage module on which a first executable program is stored
- one or more first processors, where the one or more first processors call the first executable program to implement the video processing method provided in the first aspect of the present disclosure
- a first I/O interface where the first I/O interface is connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
- the first processor is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.; the first storage module is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
- the first I/O interface is connected between the first processor and the first storage module, and can realize the information interaction between the first processor and the first storage module, which includes but is not limited to a data bus (Bus) and the like.
- the first processor, the first storage module, and the first I/O interface are connected to each other through a bus, and further connected to other components of the display terminal.
- a dispatch server includes:
- a second storage module on which a second executable program is stored
- one or more second processors, where the one or more second processors call the second executable program to implement the cutting task distribution method provided in the present disclosure
- a second I/O interface where the second I/O interface is connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
- the second processor is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
- the second storage module is a device with data storage capability, including but not limited to random access memory (RAM; more specifically, e.g., SDRAM or DDR), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
- the second I/O interface is connected between the second processor and the second storage module and enables information interaction between them; it includes, but is not limited to, a data bus (Bus).
- the second processor, the second storage module, and the second I/O interface are connected to each other through a bus, and further connected to other components of the display terminal.
- the video processing system includes the foregoing execution server 100 and the foregoing scheduling server 200.
- the execution server 100 and the scheduling server 200 may be deployed at the same place or at different locations.
- both the execution server 100 and the scheduling server 200 are cloud servers.
- the scheduling server 200 is used to allocate cutting tasks to each execution server. An implementation of the specific process by which the execution server 100 executes a cutting task assigned by the scheduling server 200 is described in detail below in conjunction with FIG. 19:
- the execution server queries the task data table generated by the scheduling server every 2 seconds;
- the ffmpeg software is used to load the initial video to be cut, which includes: calculating the width and height of each frame of sub-image in each sub-video; after determining that each frame image of the initial video is divided into 2 × 3 sub-images, calculating the coordinates of the upper-left corner of each sub-image; and, according to the coordinates of the upper-left corner of each sub-image, cropping out the pixel data of the sub-image that matches the above width and height and outputting that pixel data as the sub-image;
- the video processing system further includes a splicing screen 300, the splicing screen includes a plurality of display terminals, and the plurality of display terminals are used to display each of the sub-videos respectively.
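The cutting step described above (derive the sub-image width and height for a 2 × 3 split, then the upper-left corner of each sub-image, then crop out the matching pixel data) can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: the function name and the 1920 × 1080 frame size are hypothetical, and the frame dimensions are assumed to divide evenly by the grid.

```python
def crop_regions(frame_width, frame_height, rows, cols):
    """Compute (x, y, width, height) crop rectangles for a rows x cols split.

    Mirrors the described order of operations: first the width and height
    of each sub-image are calculated, then the upper-left corner coordinates
    from which each sub-image's pixel data is cut out.
    """
    sub_w = frame_width // cols   # width of each sub-image
    sub_h = frame_height // rows  # height of each sub-image
    return [
        (col * sub_w, row * sub_h, sub_w, sub_h)
        for row in range(rows)
        for col in range(cols)
    ]

# A 2 x 3 split of an assumed 1920 x 1080 frame yields six 640 x 540 rectangles.
regions = crop_regions(1920, 1080, 2, 3)
```

Each rectangle corresponds to one sub-video; ffmpeg's `crop=w:h:x:y` filter takes exactly these four values, so one such invocation per region would produce the sub-videos for the splicing screen.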
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
Claims (21)
- A video processing method applied to splicing-screen display, comprising: dividing each frame image of an initial video into a plurality of sub-images, the initial video comprising M frame images, where M is a positive integer greater than 1; and forming a plurality of sub-videos from all the obtained sub-images, each of the sub-videos comprising M frames of sub-images, the duration of one frame being the same across the plurality of sub-videos, wherein the i-th frame sub-images of all the sub-videos together form the i-th frame image of the initial video, and, for any one sub-video, the relative position of its i-th frame sub-image within the i-th frame image of the initial video is the same as the relative position of its other frame sub-images within the corresponding frame images of the initial video, i being a variable and a natural number taking the values 1 to M in turn.
- The video processing method according to claim 1, wherein the step of dividing each frame image of the initial video into a plurality of sub-images comprises: receiving a segmentation request, wherein the segmentation request comprises segmentation information for each frame image; and segmenting each frame image of the initial video according to the segmentation request.
- The video processing method according to claim 2, wherein the segmentation information comprises the number of sub-images into which each frame image is divided and layout information of the plurality of sub-images into which one frame image is divided, and the step of segmenting each frame image of the initial video according to the segmentation request comprises: determining the size of each of the sub-images according to the segmentation information; and segmenting each frame image of the initial video according to the size of each of the sub-images and the layout information of each of the sub-images.
- The video processing method according to claim 3, wherein the step of segmenting each frame image of the initial video according to the size of the sub-images and the layout information of each of the sub-images comprises: determining the coordinates of the reference point of each of the sub-images in the corresponding image according to the size of the sub-images and the layout information of each of the sub-images; and determining the information of the pixels belonging to each of the sub-images according to the coordinates of the reference point of each of the sub-images in the corresponding image and the size of the sub-images, so as to obtain each of the sub-images.
- The video processing method according to claim 4, wherein each frame image of the initial video is a rectangular image, each sub-image is a rectangular image, and the reference point of a sub-image is the vertex at the upper-left corner of the sub-image.
- The video processing method according to claim 5, wherein the segmentation information specifies that each frame image is divided into a rows and b columns of sub-images, where a and b are both positive integers.
- The video processing method according to any one of claims 1 to 6, further comprising, after forming the plurality of sub-videos from all the obtained sub-images: assigning an address to each of the sub-videos.
- The video processing method according to claim 7, further comprising, after forming the plurality of sub-videos from all the obtained sub-images: determining a mapping relationship between each of the sub-videos and each display terminal that plays that sub-video.
- The video processing method according to any one of claims 1 to 6, further comprising, before the step of dividing each frame image of the initial video into a plurality of sub-images: obtaining a cutting task address; and obtaining the initial video according to the cutting task address.
- The video processing method according to claim 9, wherein the initial video has a target format, and the step of obtaining the initial video according to the task address comprises: obtaining the source video at the task address; and, when the format of the source video does not match the target format, performing format conversion on the source video to obtain the initial video.
- The video processing method according to any one of claims 1 to 6, further comprising, after the step of forming the plurality of sub-videos from all the obtained sub-images: determining a playback task according to each sub-video; generating a task sheet according to the playback task; and delivering the task sheet to the plurality of display terminals of the splicing screen.
- The video processing method according to claim 11, wherein the step of determining the playback task according to each sub-video and the identification information of the plurality of display terminals in the splicing screen comprises: determining the identification information of the display terminals required by the playback task; determining the master and the slaves in the playback task according to the identification information of the display terminals required by the playback task; and generating the playback task according to each of the sub-videos, the identification information of the display terminal serving as the master, and the identification information of the display terminals serving as slaves.
- A cutting task distribution method, comprising: generating at least one cutting task according to a received source video; and distributing the cutting task to a server satisfying a predetermined condition according to the state of each server, so that the server receiving the cutting task performs the video processing method according to any one of claims 1 to 12.
- The cutting task distribution method according to claim 13, wherein the predetermined condition is: being a server, among the plurality of servers, whose number of tasks being executed does not exceed a predetermined number.
- The cutting task distribution method according to claim 13, further comprising, between the step of generating at least one cutting task according to the received source video and the step of distributing the cutting task to a server satisfying the predetermined condition according to the state of each server: sorting N servers in ascending order of the number of tasks each server is executing; wherein the step of distributing the cutting task to a server satisfying the predetermined condition according to the state of each server comprises: sending the generated cutting tasks in turn to the top L servers respectively, where L equals the number of generated cutting tasks and L < N.
- The cutting task distribution method according to any one of claims 13 to 15, further comprising: storing the mapping relationship between the cutting tasks and the servers executing the cutting tasks.
- A computer-readable storage medium storing an executable program that, when invoked, can implement one of the following methods: the video processing method according to any one of claims 1 to 12; or the cutting task distribution method according to any one of claims 13 to 16.
- An execution server, comprising: a first storage module on which a first executable program is stored; one or more first processors that call the first executable program to implement the video processing method according to any one of claims 1 to 12; and a first I/O interface connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
- A scheduling server, comprising: a second storage module on which a second executable program is stored; one or more second processors that call the second executable program to implement the cutting task distribution method according to any one of claims 13 to 16; and a second I/O interface connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
- A video processing system, comprising the execution server according to claim 18 and the scheduling server according to claim 19.
- The video processing system according to claim 20, further comprising a splicing screen, wherein the splicing screen comprises a plurality of display terminals for respectively displaying each of the sub-videos.
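The allocation rule of claims 13 to 15 — skip servers whose running-task count already reaches the predetermined number, sort the remaining N servers by task count in ascending order, and send the L generated cutting tasks to the first L servers — can be sketched as follows. This is a minimal sketch with hypothetical names, not the disclosed implementation; the returned task-to-server mapping is merely the kind of record claim 16 says is stored.

```python
def assign_cutting_tasks(tasks, server_loads, max_tasks):
    """Assign each cutting task to one of the least-loaded eligible servers.

    tasks: list of task identifiers (L = len(tasks)).
    server_loads: dict mapping server id -> number of tasks it is executing.
    max_tasks: predetermined limit; servers at or above it are ineligible.
    """
    # Eligible servers (claim 14's condition), sorted by current load
    # in ascending order (claim 15's ordering step).
    eligible = sorted(
        (s for s, load in server_loads.items() if load < max_tasks),
        key=lambda s: server_loads[s],
    )
    if len(tasks) > len(eligible):
        raise RuntimeError("not enough eligible servers for the cutting tasks")
    # The first L servers each receive one generated cutting task.
    return dict(zip(tasks, eligible))
```

For example, with loads {"a": 3, "b": 1, "c": 2} and a limit of 3, server "a" is ineligible and two tasks go to "b" and "c" in that order.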
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080000852.4A CN114072760B (zh) | 2020-05-29 | 2020-05-29 | Dividing method, distribution method, medium, server, system |
US17/309,612 US11995371B2 (en) | 2020-05-29 | 2020-05-29 | Dividing method, distribution method, medium, server, system |
PCT/CN2020/093395 WO2021237708A1 (zh) | 2020-05-29 | 2020-05-29 | Dividing method, distribution method, medium, server, system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/093395 WO2021237708A1 (zh) | 2020-05-29 | 2020-05-29 | Dividing method, distribution method, medium, server, system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021237708A1 true WO2021237708A1 (zh) | 2021-12-02 |
Family
ID=78745467
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/093395 WO2021237708A1 (zh) | Dividing method, distribution method, medium, server, system | 2020-05-29 | 2020-05-29 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11995371B2 (zh) |
CN (1) | CN114072760B (zh) |
WO (1) | WO2021237708A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116437028B (zh) * | 2023-06-14 | 2023-09-08 | 深圳市视景达科技有限公司 | Video display method and system |
CN117173161B (zh) * | 2023-10-30 | 2024-02-23 | 杭州海康威视数字技术股份有限公司 | Content security detection method, apparatus, device, and system |
CN117931458B (zh) * | 2024-03-21 | 2024-06-25 | 北京壁仞科技开发有限公司 | Inference service scheduling method, apparatus, processor, and chip |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130122960A1 (en) * | 2011-11-16 | 2013-05-16 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
CN104657101A (zh) * | 2015-02-12 | 2015-05-27 | 武汉新蜂乐众网络技术有限公司 | Image splicing display method and system |
CN105739935A (zh) * | 2016-01-22 | 2016-07-06 | 厦门美图移动科技有限公司 | Multi-terminal joint display method, apparatus, and system |
CN108093205A (zh) * | 2016-11-23 | 2018-05-29 | 杭州海康威视数字技术股份有限公司 | Cross-screen synchronous display method and system |
CN109213464A (zh) * | 2018-09-26 | 2019-01-15 | 永州市金蚂蚁新能源机械有限公司 | Image splicing display method and system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2617119A1 (en) * | 2008-01-08 | 2009-07-08 | Pci Geomatics Enterprises Inc. | Service oriented architecture for earth observation image processing |
CN103838779B (zh) * | 2012-11-27 | 2019-02-05 | 深圳市腾讯计算机***有限公司 | Cloud transcoding method and system reusing idle computing resources, and distributed file apparatus |
CN103606158A (zh) | 2013-11-29 | 2014-02-26 | 深圳市龙视传媒有限公司 | Video clipping preprocessing method and terminal |
US9922394B2 (en) * | 2014-12-05 | 2018-03-20 | Samsung Electronics Co., Ltd. | Display apparatus and method for displaying split screens thereof |
US10607571B2 (en) * | 2017-08-14 | 2020-03-31 | Thomas Frederick Utsch | Method and system for the distribution of synchronized video to an array of randomly positioned display devices acting as one aggregated display device |
CN106373493A (zh) * | 2016-09-27 | 2017-02-01 | 京东方科技集团股份有限公司 | Splicing screen, splicing-screen driving method and apparatus, and display device |
CN107229676A (zh) | 2017-05-02 | 2017-10-03 | 国网山东省电力公司 | Big-data-based distributed video cutting model and application |
CN109495697A (zh) * | 2017-09-11 | 2019-03-19 | 广州彩熠灯光有限公司 | Multi-screen extension method, system, storage medium, and terminal based on video cutting |
- 2020
- 2020-05-29 CN CN202080000852.4A patent/CN114072760B/zh active Active
- 2020-05-29 US US17/309,612 patent/US11995371B2/en active Active
- 2020-05-29 WO PCT/CN2020/093395 patent/WO2021237708A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US11995371B2 (en) | 2024-05-28 |
US20220308821A1 (en) | 2022-09-29 |
CN114072760B (zh) | 2024-06-25 |
CN114072760A (zh) | 2022-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021237708A1 (zh) | Dividing method, distribution method, medium, server, system | |
RU2639651C2 (ru) | Image identification and organization according to a layout without user intervention | |
CN108924582B (zh) | Video recording method, computer-readable storage medium, and recorded-broadcast system | |
CN106572139B (zh) | Multi-terminal control method, terminal, server, and system | |
CN110149518B (zh) | Media data processing method, system, apparatus, device, and storage medium | |
CN111107386A (zh) | Method, apparatus, electronic device, system, and storage medium for reviewing live-streamed video | |
US11350151B2 (en) | Methods, systems and devices that enable a user of a mobile phone to select what content is displayed on a screen of a consumer electronic device on display | |
CN112153459A (zh) | Method and apparatus for screen-casting display | |
WO2018120519A1 (zh) | Image processing method and apparatus | |
CN114816308B (zh) | Information partitioned display method and related device | |
CN109218817B (zh) | Method and apparatus for displaying a virtual gift notification message | |
US10467279B2 (en) | Selecting digital content for inclusion in media presentations | |
JP7471510B2 (ja) | Method, apparatus, device, and storage medium for converting pictures into video | |
CN109660852B (zh) | Video preview method before publication of a recorded video, storage medium, device, and system | |
CN111064700B (zh) | Cloud game download method, apparatus, and system | |
CN112714341B (zh) | Information acquisition method, cloud set-top-box system, physical set-top box, and storage medium | |
CN107027056B (zh) | Desktop configuration method, server, and client | |
EP4089533A2 (en) | Pooling user interface engines for cloud ui rendering | |
JP2024517702A (ja) | Method, computer device, and computer program for providing high-definition video of a region of interest using a single stream | |
CN113824988B (zh) | On-demand playback method adapted to different scenarios, and terminal | |
CN110337043A (zh) | Video playback method and apparatus for a television, and storage medium | |
US11838593B2 (en) | Multi-mode selectable media playback | |
CN113099247B (zh) | Virtual resource processing method, apparatus, server, storage medium, and program product | |
CN107566904A (zh) | Resource data update method and set-top-box device | |
CN115278278B (zh) | Page display method, apparatus, electronic device, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20937581 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20937581 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.07.2023) |
|