WO2021237708A1 - Cutting method, distribution method, medium, server, *** - Google Patents


Info

Publication number
WO2021237708A1
WO2021237708A1 (PCT/CN2020/093395)
Authority
WO
WIPO (PCT)
Prior art keywords
sub
video
image
frame
images
Prior art date
Application number
PCT/CN2020/093395
Other languages
English (en)
French (fr)
Inventor
李鹏
李文娟
邵广玉
李淑一
秦洋
王洪
雷一鸣
贺王强
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to CN202080000852.4A priority Critical patent/CN114072760B/zh
Priority to US17/309,612 priority patent/US11995371B2/en
Priority to PCT/CN2020/093395 priority patent/WO2021237708A1/zh
Publication of WO2021237708A1 publication Critical patent/WO2021237708A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays composed of modules, e.g. video walls
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0407 Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G 2340/0435 Change or adaptation of the frame rate of the video stream

Definitions

  • the present disclosure relates to the field of display technology, in particular to a video processing method, a cutting task distribution method, a computer-readable storage medium, an execution server, a scheduling server, and a video processing system.
  • In some settings that require a large display area, a splicing screen is needed.
  • A splicing screen includes multiple display terminals; to present a single picture, the different display terminals in the same splicing screen must each display a different part of that picture.
  • the purpose of the present disclosure is to provide a video processing method, a cutting task distribution method, a computer-readable storage medium and an execution server, a scheduling server, and a video processing system.
  • a video processing method including:
  • each of the sub-videos includes M frames of sub-images, and the durations of corresponding frames of the multiple sub-videos are equal, wherein the i-th frame sub-images of all the sub-videos are assembled into the i-th frame image of the initial video, where i is a natural number greater than or equal to 1.
  • the step of dividing each frame of the initial video into multiple sub-images includes:
  • the segmentation information includes the number of sub-images into which each frame of the image is segmented and the layout information of multiple sub-images into which one frame of image is segmented.
  • the steps for segmenting each frame of image include:
  • Each frame of the initial video is divided according to the size of each sub-image and the layout information of the multiple sub-images divided into one frame of image.
  • the step of segmenting each frame image of the initial video according to the size of the sub-image and the layout information of each of the sub-images includes:
  • the information of the pixels belonging to each sub-image is determined according to the coordinates of the reference point of each sub-image in the corresponding image and the size of the sub-image, so as to obtain each sub-image.
  • each frame image of the initial video is a rectangular image
  • the number of sub-images into which each frame image is divided is a × b
  • each sub-image is a rectangular image
  • the reference point of each sub-image is the vertex at the upper left corner of the sub-image.
  • the video processing method further includes, after forming multiple sub-videos from all the obtained sub-images:
  • the video cutting method further includes, before the step of dividing each frame of the initial video into multiple sub-images:
  • the initial video has a target format
  • the step of acquiring the initial video according to the task address includes:
  • format conversion is performed on the source video to obtain the initial video.
  • the method further includes, after the step of forming multiple sub-videos from all the obtained sub-images:
  • the task list is issued to the multiple display terminals of the splicing screen.
  • the step of determining the playback task according to the identification information of each sub-video and multiple display terminals in the splicing screen includes:
  • the play task is generated according to each of the sub-videos, the identification information of the display terminal used as the master, and the identification information of the display terminals used as slaves.
  • a cutting task allocation method including:
  • the cutting task is allocated to a server that meets a predetermined condition, so that the server that receives the cutting task executes the above-mentioned video processing method provided in the present disclosure.
  • the predetermined condition is:
  • among the multiple servers, the number of tasks a server is executing does not exceed a predetermined number.
  • the cutting task distribution method further includes, between the step of generating at least one cutting task according to the received source video and the step of allocating the cutting task to servers meeting predetermined conditions according to the status of each server:
  • sorting the N servers in ascending order of the number of tasks each server is executing;
  • the step of allocating the cutting task to servers meeting predetermined conditions according to the status of each server includes:
  • the cutting task distribution method further includes:
  • the mapping relationship between the cutting task and the server executing the cutting task is stored.
  • a computer-readable storage medium is provided, the computer-readable storage medium is used to store an executable program, and when the executable program is invoked, one of the foregoing methods (the video processing method or the cutting task allocation method) can be implemented:
  • an execution server includes:
  • a first storage module on which a first executable program is stored
  • one or more first processors, where the one or more first processors call the first executable program to implement the video processing method
  • a first I/O interface where the first I/O interface is connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
  • a dispatch server includes:
  • a second storage module on which a second executable program is stored
  • one or more second processors, where the one or more second processors call the second executable program to implement the cutting task allocation method
  • a second I/O interface where the second I/O interface is connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
  • a video processing system includes the execution server and the scheduling server.
  • the video processing system further includes a splicing screen, the splicing screen includes a plurality of display terminals, and the plurality of display terminals are used to display each of the sub-videos respectively.
  • FIG. 1 is a schematic diagram of an implementation manner of a video processing method provided in the first aspect of the present disclosure
  • Shown in Figure 2a is a schematic diagram of the first frame of the initial video
  • FIG. 2b is a schematic diagram of the second frame of the initial video
  • Figure 3a shows a schematic diagram of the first frame image of the initial video being divided into four sub-images
  • Figure 3b shows a schematic diagram of the second frame image of the initial video being divided into four sub-images
  • FIG. 4 is a schematic flowchart of an embodiment of step S110
  • Figure 5 is a schematic diagram of an array displayed on the user side
  • FIG. 6 is a schematic flowchart of an implementation manner of step S112;
  • FIG. 7 is a schematic flowchart of an embodiment of step S112;
  • FIG. 8 is a schematic flowchart of another implementation manner of step S112;
  • FIG. 9 is a schematic diagram of two different initial videos displayed on different display terminals in the splicing screen.
  • FIG. 10 shows a schematic diagram of another implementation manner of the video processing method provided by the present disclosure.
  • FIG. 11 shows a schematic flowchart of step S105
  • FIG. 12 shows a schematic flowchart of step S105b
  • FIG. 13 shows a schematic diagram of still another embodiment of the video processing method provided by the present disclosure.
  • FIG. 14 is a schematic flowchart of an embodiment of step S150;
  • FIG. 15 is a flowchart of an embodiment of the cutting task distribution method provided by the present disclosure.
  • FIG. 16 is a flowchart of another embodiment of the cutting task distribution method provided by the present disclosure.
  • FIG. 17 is a flowchart of still another embodiment of the cutting task distribution method provided by the present disclosure.
  • FIG. 18 is a flowchart of a specific implementation of the cutting task distribution method provided by the present disclosure.
  • FIG. 19 is a flowchart of the execution server provided by the present disclosure when performing a cutting task
  • FIG. 20 is a schematic diagram of modules of the video processing system provided by the present disclosure.
  • the video processing method includes:
  • each frame of the initial video is divided into multiple sub-images, the initial video includes M frames of images, where M is a positive integer greater than 1;
  • step S120 all the obtained sub-images are used to form a plurality of sub-videos, each of the sub-videos includes M frames of sub-images, and the durations of corresponding frames of the plurality of sub-videos are equal, wherein the i-th frame sub-images of all the sub-videos are assembled into the i-th frame image of the initial video.
  • the relative position of the i-th frame sub-image of a given sub-video in the i-th frame image of the initial video is the same as the relative positions of that sub-video's other frame sub-images in their corresponding frame images of the initial video; i is a variable, i is a natural number, and i runs from 1 to M in sequence.
  • i is 1 to M in turn means that i is 1, 2, 3, 4, 5...M respectively.
  • Each frame of the initial video is a rectangular image, and each frame is divided into four sub-images with two rows and two columns.
  • the sub-images obtained by the division may form 4 sub-videos.
  • these 4 sub-videos are respectively referred to as the first sub-video, the second sub-video, the third sub-video, and the fourth sub-video.
  • in the first sub-video, the first frame sub-image is the sub-image in the first row and first column of the four sub-images obtained by dividing the first frame image of the initial video; the second frame sub-image is the sub-image in the first row and first column of the four sub-images obtained by dividing the second frame image of the initial video; and so on.
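The grouping described above can be sketched as follows. This is a minimal illustration, assuming each frame is a 2D list of pixel values and a fixed 2-row-by-2-column split; the function names are illustrative, not from the disclosure:

```python
def split_frame_2x2(frame):
    """Split one frame (a 2D list of pixels) into four sub-images
    arranged in 2 rows and 2 columns, returned in row-major order."""
    rows, cols = len(frame), len(frame[0])
    h, w = rows // 2, cols // 2
    subs = []
    for r in range(2):
        for c in range(2):
            subs.append([row[c * w:(c + 1) * w]
                         for row in frame[r * h:(r + 1) * h]])
    return subs

def form_sub_videos(initial_video):
    """Group the k-th sub-image of every frame into the k-th sub-video,
    so that the i-th frames of all sub-videos reassemble frame i."""
    sub_videos = [[] for _ in range(4)]
    for frame in initial_video:          # frame i of the initial video
        for k, sub in enumerate(split_frame_2x2(frame)):
            sub_videos[k].append(sub)    # becomes frame i of sub-video k
    return sub_videos
```

Each of the four sub-videos then has M frames, and playing them simultaneously on a 2 × 2 splicing screen reconstructs the initial video frame by frame.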
  • the multiple sub-videos obtained in step S120 of the present disclosure are delivered to the display terminals in the splicing screen; each display terminal displays one sub-video, so that the splicing screen as a whole displays the initial video.
  • the i-th frame sub-image of each sub-video is a part of the i-th frame image of the initial video, and the time axis of each sub-video is the same as the time axis of the initial video. Therefore, when the display terminals of the splicing screen each display their sub-video, the result is visually equivalent to playing the initial video.
  • the "splicing screen” here refers to a display terminal group formed by splicing multiple display terminals, and multiple display terminals in the splicing screen can be used to display the same screen.
  • the video processing method provided by the present disclosure will be explained below in conjunction with FIG. 2a, FIG. 2b, FIG. 3a, and FIG. 3b.
  • the initial video includes M frames of images.
  • Figure 2a shows a schematic diagram of the first frame of the initial video
  • Figure 2b shows a schematic diagram of the second frame of the initial video.
  • step S110 the first frame image of the initial video is divided into four sub-images as shown in FIG. 3a, the second frame image of the initial video is divided into four sub-images as shown in FIG. 3b, and so on, until all M frame images of the initial video are each divided into four sub-images.
  • step S120 four sub-videos are formed using each sub-image obtained from each frame image.
  • the time axis of the four sub-videos is the same, and all of them are the same as the time axis of the initial video.
  • the duration of one frame of the initial video is T ms, and the duration of one frame of each sub-video is t ms.
  • in some embodiments, t = T.
  • however, the present disclosure is not limited to this; the specific value of t can be set according to playback requirements, as long as the duration of one frame is the same in every sub-video. Therefore, when the sub-videos are played simultaneously, their frames can be synchronized, and the initial video can be displayed across the splicing screen.
  • the number of sub-videos into which the initial video is divided is not specifically limited. As an optional implementation manner, it may be determined according to the number of display terminals in the splicing screen. For example, when the splicing screen includes four display terminals, the initial video is divided into four sub-videos. That is, each frame of the initial video is divided into four sub-images.
  • the video processing method provided by the present disclosure can be executed by a server arranged in the cloud.
  • a split request can be generated according to the actual situation of the splicing screen, and then the split request can be uploaded to the cloud server.
  • the segmentation request may include segmentation information corresponding to the segmentation method of each frame of the initial video (for example, the number of sub-images into which each frame image is segmented, the shape of each sub-image, the size of each sub-image, etc.).
  • step S110 may include:
  • step S111 a segmentation request is received, where the segmentation request includes segmentation information for each frame of image;
  • step S112 each frame of image of the initial video is segmented according to the segmentation request.
  • the sender of the segmentation request in step S111 is not particularly limited.
  • the split request may be sent (uploaded) by the administrator of the splicing screen via the Internet to the server that executes the video processing method.
  • the segmentation information of each frame of image in the segmentation request may include the number of sub-images into which each frame of image is divided and the layout information of multiple sub-images into which one frame of image is divided.
  • step S112 may include:
  • step S112a the size of each of the sub-images is determined according to the segmentation information
  • step S112b each frame of the initial video is segmented according to the size of each sub-image and the layout information of the sub-images.
  • the size of each frame of image of the initial video is known, and the size of each sub-image can be determined according to the number of sub-images corresponding to each frame of image.
  • step S112b may include:
  • step S112b1 the coordinates of the reference point of each sub-image in the corresponding image are determined according to the size of each sub-image and the layout information of the sub-images;
  • step S112b2 the information of the pixels belonging to each sub-image in the corresponding image is determined according to the coordinates of the reference point and the size of each sub-image, so as to obtain each sub-image.
  • the reference point may be the first point displayed when the sub-image is displayed.
  • the reference point of the sub-image can be the top left corner of the sub-image.
  • the information of the pixels belonging to each sub-image can be determined according to the coordinates of the reference point and the size of the sub-image (the information may include the position information of the pixel in the image and the grayscale information of the pixel).
  • the information of each sub-image can be output.
  • the shape of each frame sub-image of the sub-videos into which the initial video is divided may be determined by the outline of each display terminal in the splicing screen.
  • when the shape of the splicing screen is a rectangle and the splicing screen includes rectangular display terminals arranged in 2 rows and 4 columns, the shape of the sub-images may also be rectangular.
  • the reference point of the sub-image is the vertex at the upper left corner of the sub-image.
  • each frame of image of the initial video is a rectangular image
  • the number of sub-images into which each frame image is divided is a × b
  • each sub-image is a rectangular image
  • the reference point of the sub-image is the vertex at the upper left corner of the sub-image, where a and b are both positive integers.
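For a rectangular frame divided into a × b rectangular sub-images with the upper-left vertex as the reference point, the sub-image size and reference-point coordinates can be computed as in this sketch. The helper is an assumption, not part of the disclosure; it takes a to be the number of rows and b the number of columns, and assumes the frame size divides evenly:

```python
def sub_image_rects(frame_w, frame_h, a, b):
    """Return (x, y, w, h) for each of the a*b sub-images of a
    frame_w x frame_h rectangular frame, where (x, y) is the
    upper-left reference point, listed row by row."""
    w, h = frame_w // b, frame_h // a   # b columns, a rows
    return [(col * w, row * h, w, h)
            for row in range(a) for col in range(b)]
```

For example, a 1920 × 1080 frame split 2 × 3 yields six 640 × 540 sub-images whose reference points step by 640 horizontally and 540 vertically.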
  • the user terminal may display the thumbnail of the split mode.
  • an array diagram can be displayed on the user terminal, and the segmentation request can be generated by the operator selecting the number of rows and the number of columns.
  • the terminal on the user side can display the array diagram, and the operator can select the number of rows and columns that need to be divided into each frame of image by selecting it with the mouse, and the layout of each sub-image can be clearly determined through the array diagram.
  • the layout information mentioned here refers to the relative coordinate information of the multiple sub-images into which one image is divided; the relative coordinate information of each sub-image in the layout corresponds to the position coordinates of that sub-image in the corresponding image.
  • step S112 includes:
  • step S112e the relative coordinate information of each sub-image in the layout is converted into position coordinates of each sub-image in the corresponding image;
  • step S112f the image is segmented according to the position coordinates of each of the sub-images in the corresponding image.
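Steps S112e and S112f can be illustrated as follows. This is a sketch under the assumption that a layout cell is identified by (row, column) relative coordinates and that every sub-image has the same pixel size; the function names are illustrative:

```python
def layout_to_position(rel_row, rel_col, sub_w, sub_h):
    """Step S112e: convert a sub-image's relative (row, column) layout
    coordinates into the pixel coordinates of its upper-left corner."""
    return (rel_col * sub_w, rel_row * sub_h)

def crop(frame, x, y, w, h):
    """Step S112f: cut the w x h region at (x, y) out of a frame
    represented as a 2D list of pixels."""
    return [row[x:x + w] for row in frame[y:y + h]]
```

With 640 × 540 sub-images, for instance, the relative coordinate (1, 3) maps to the pixel position (1920, 540) in the corresponding image.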
  • the user end provides a 2 ⁇ 4 segmentation request.
  • A1 to A4 and B1 to B4 represent 8 display terminals of the same size.
  • the unfilled part displays the first initial video, which is cut into 2 × 3 sub-videos, and the part filled with diagonal lines displays the second initial video, which is cut into 2 × 1 sub-videos.
  • taking each minimum unit (i.e., each sub-video) as one grid cell, the coordinate of the sub-video corresponding to display terminal A1 is (0, 0), and the coordinate of the sub-video corresponding to display terminal B4 is (1, 3).
  • the division modes and the initial videos can be mapped one-to-one, and uploaded to the server that executes the video processing method.
  • step S112d to step S112f are executed.
  • videos can be divided for different splicing screens, and different display purposes can be achieved.
  • the display purpose is to display video on a spliced screen in which multiple display terminals are arranged in 2 rows and 3 columns.
  • the information carried in the segmentation request may include segmenting the initial video into 2 ⁇ 3 sub-videos, where each frame of the initial video is a rectangular image.
  • the video cutting method further includes, after step S120:
  • step S130 an address is allocated to each of the sub-videos.
  • the splicing screen can download the corresponding sub-videos according to the address of each sub-video.
  • each sub-video may be downloaded to a local storage device first, and then each sub-video may be distributed to a corresponding display terminal.
  • each sub-video can be directly downloaded to the corresponding display terminal.
  • there is no particular limitation on how to determine the correspondence between each sub-video and each display terminal. For example, after downloading the sub-videos to the local storage device, each sub-video can first be previewed, and the correspondence between each sub-video and each display terminal can then be determined according to the preview result.
  • the video processing method may further include the following steps after step S120:
  • step S140 a mapping relationship between each of the sub-videos and each display terminal that plays each sub-video is determined.
  • step S130 may be executed first, and then step S140 may be executed, or step S140 may be executed first, and then step S130 may be executed, or step S130 and step S140 may be executed simultaneously.
  • the initial video may be a video resource stored locally on the server that executes the video processing method.
  • the segmentation request uploaded by the user includes identification information (for example, a video number) of the initial video to be segmented. After receiving the segmentation request, the initial video is first determined, and then step S110 is performed.
  • the initial video may also be a video resource stored in another location.
  • the video processing method may further include, before step S110:
  • step S100 the cutting task address is acquired
  • step S105 the initial video is acquired according to the cutting task address.
  • step S105 may include:
  • step S105a the source video at the task address is acquired
  • step S105b when the format of the source video is inconsistent with the target format, format conversion is performed on the source video to obtain the initial video.
  • video formats include mp4, avi, wmv, rmvb, and other formats.
  • when the target format is mp4 and the format of the source video is not mp4, the source video can be transcoded into mp4 format.
  • the initial video may be stored in the cutting task address.
  • step S105b may further include:
  • step S105b1 the source video is stored locally;
  • step S105b2 a transcoding task is generated;
  • step S105b3 the source video is transcoded using FFMPEG, and a video in mp4 format is output;
  • step S105b4 the transcoding progress is output;
  • step S105b5 the address of the transcoded file is recorded in the database.
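The transcoding flow (steps S105b1 to S105b5) might be orchestrated roughly as below. This is a sketch, not the disclosed implementation: the ffmpeg invocation uses standard ffmpeg options, while the paths and the `record` callback are hypothetical stand-ins for the database write of step S105b5:

```python
import subprocess

def build_transcode_cmd(src_path, dst_path):
    """Build an FFMPEG command line that transcodes the source video
    into an mp4 file (H.264 video, AAC audio)."""
    return ["ffmpeg", "-y", "-i", src_path,
            "-c:v", "libx264", "-c:a", "aac", dst_path]

def transcode_to_mp4(src_path, dst_path, record):
    """S105b3: run the transcode; S105b5: record the output address
    via the caller-supplied `record` callback (e.g. a database write)."""
    subprocess.run(build_transcode_cmd(src_path, dst_path), check=True)
    record(dst_path)
```

In practice the transcoding progress of step S105b4 could be read from ffmpeg's stderr stream; that is omitted here for brevity.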
  • each sub-video obtained by cutting the initial video needs to be delivered to each display terminal of the splicing screen.
  • the mapping relationship between the sub video and the display terminal can be established.
  • the display terminal displays the corresponding sub video.
  • the video processing method may further include performing after step S120:
  • step S150 a play task is determined according to each sub-video
  • step S160 a task list is generated according to the playback task
  • step S170 the task list is issued to multiple display terminals of the splicing screen.
  • after receiving the task list, the display terminal can display the sub-videos defined in the task list.
  • step S150 may include:
  • step S151 the identification information of the display terminal required by the playback task is determined
  • step S152 the master in the play task and the slave in the play task are determined according to the identification information of the display terminal required by the play task;
  • step S153 the play task is generated according to the identification information of each of the sub-videos and the display terminal used as the master, and the identification information of the display terminal used as the slave.
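The play-task construction of steps S151 to S153 could look like the following sketch. The task-list structure is an assumption, since the disclosure does not fix a concrete format:

```python
def make_play_task(sub_video_ids, master_id, slave_ids):
    """S153: build one play task from the sub-video identifiers, the
    terminal acting as master, and the terminals acting as slaves."""
    return {"sub_videos": list(sub_video_ids),
            "master": master_id,
            "slaves": list(slave_ids)}

def make_task_list(play_tasks):
    """S160: wrap the play tasks into the task list that is issued to
    the display terminals of the splicing screen (S170)."""
    return {"tasks": list(play_tasks)}
```

The master terminal can then use the slave identifiers in its task to drive the slaves' playback, as described above.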
  • the display terminal used as the master can control the display terminal used as the slave to display corresponding playback tasks.
  • the cutting task distribution method includes:
  • step S210 generate at least one cutting task according to the received source video
  • step S220 according to the status of each server, the cutting task is allocated to a server that meets a predetermined condition, so that the server that receives the cutting task executes the above-mentioned video processing method provided in the present disclosure.
  • multiple distributed servers set up in the cloud can all execute the video processing method provided in the first aspect of the present disclosure.
  • the status of each server capable of executing the video processing method can be determined first (the status includes the number of tasks currently performed by the server).
  • the predetermined conditions are not particularly limited.
  • the predetermined condition is:
  • among the multiple servers, the number of tasks a server is executing does not exceed a predetermined number.
  • the predetermined number can be determined according to the processing capacity of each server.
  • the predetermined number may be two.
  • the cutting task distribution method further includes, between step S210 and step S220:
  • step S215 the N servers are sorted in ascending order of the number of tasks each server is executing.
  • the predetermined condition includes: among the N servers, ranking in the top L positions, where L and N are both positive integers, and L ⁇ N.
  • L may be less than N/2.
  • the cutting task distribution method further includes:
  • step S230 the mapping relationship between the cutting task and the server executing the cutting task is stored.
  • step S210 a cutting task is generated according to the received source video
  • Step S215 is specifically executed as: obtaining the configuration information of each server that can perform cutting tasks, the IP address of each server, and the number of tasks each server is currently processing, and sorting the servers accordingly;
  • Step S220 is specifically executed as: preferentially allocating the 2 × 3 cutting task to the servers that are executing fewer tasks;
  • Step S230 is specifically executed as: storing the data of the cutting task (in the present disclosure, the task of dividing the initial video into 6 sub-videos can be saved as one task, or as multiple tasks) and the IP address of the server executing the task in the database in the form of a data task table.
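The allocation logic of steps S215 to S230 can be sketched as follows, assuming each server is described by its IP address and current task count; the data shapes and names are illustrative, not from the disclosure:

```python
def pick_server(servers, top_l):
    """S215: sort servers in ascending order of running task count;
    S220: choose one of the top-L least-loaded servers (here the first)."""
    ranked = sorted(servers, key=lambda s: s["tasks"])
    return ranked[:top_l][0]

def assign_task(task_id, servers, task_table, top_l=2):
    """S230: store the cutting-task -> server-IP mapping in the task
    table (standing in for the database's data task table)."""
    server = pick_server(servers, top_l)
    task_table[task_id] = server["ip"]
    return server
```

The stored mapping lets the execution servers later look up which cutting tasks were assigned to them.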
  • a computer-readable storage medium is provided, the computer-readable storage medium is used to store an executable program, and when the executable program is invoked, one of the foregoing methods (the video processing method or the cutting task allocation method) can be implemented:
  • Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
  • the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
  • Computer storage media include but are not limited to RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or Any other medium used to store desired information and that can be accessed by a computer.
  • a communication medium usually contains computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.
  • an execution server includes:
  • a first storage module on which a first executable program is stored
  • one or more first processors, where the one or more first processors call the first executable program to implement the video processing method provided in the first aspect of the present disclosure
  • a first I/O interface where the first I/O interface is connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
  • the first processor is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.; the first storage module is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
  • the first I/O interface is connected between the first processor and the first storage module, and can realize the information interaction between the first processor and the first storage module, which includes but is not limited to a data bus (Bus) and the like.
  • the first processor, the first storage module, and the first I/O interface are connected to each other through a bus, and further connected to other components of the display terminal.
  • a dispatch server includes:
  • a second storage module on which a second executable program is stored
  • one or more second processors, where the one or more second processors call the second executable program to implement the cutting task distribution method provided in the present disclosure
  • a second I/O interface where the second I/O interface is connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
  • the second processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the second storage module is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
  • the second I/O interface is connected between the second processor and the second storage module and enables information interaction between them; it includes but is not limited to a data bus (Bus).
  • the second processor, the second storage module, and the second I/O interface are connected to each other through a bus, and further connected to other components of the display terminal.
  • the video processing system includes the foregoing execution server 100 and the foregoing scheduling server 200.
  • the execution server 100 and the scheduling server 200 may be deployed at the same place or at different locations.
  • both the execution server 100 and the scheduling server 200 are cloud servers.
  • the scheduling server 200 is used to allocate cutting tasks to each execution server. An implementation of the specific process by which the execution server 100 executes a cutting task assigned by the scheduling server 200 is described in detail below in conjunction with FIG. 19:
  • the execution server queries the task data table generated by the scheduling server every 2 seconds;
  • using the ffmpeg software to load the initial video to be cut, including: calculating the width and height of each frame of sub-image in each sub-video; after determining that each frame of image in the initial video is divided into 2×3 sub-images, calculating the top-left corner coordinates of each sub-image; cropping out, according to the top-left coordinates of each sub-image, the pixel data of the sub-picture matching the above width and height; and outputting the pixel data as the sub-image;
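The coordinate calculation in the step above can be sketched as follows; this is an illustrative sketch, not the disclosed implementation, and the 1920×1080 frame size is an assumption chosen for the example:

```python
def sub_image_rects(frame_w, frame_h, rows, cols):
    """Return (x, y, w, h) for each sub-image: top-left corner coordinates
    plus width and height, row-major, for an even rows x cols split."""
    w, h = frame_w // cols, frame_h // rows
    return [(c * w, r * h, w, h) for r in range(rows) for c in range(cols)]

# 2x3 split of a 1920x1080 frame, as in the example above
rects = sub_image_rects(1920, 1080, 2, 3)
```

Each tuple gives one sub-image's crop origin and size; the bottom-right sub-image of the 2×3 grid, for instance, starts at (1280, 540).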
  • the video processing system further includes a splicing screen 300, the splicing screen includes a plurality of display terminals, and the plurality of display terminals are used to display each of the sub-videos respectively.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a video processing method applied to a splicing screen, including: dividing each frame of image of an initial video into a plurality of sub-images, the initial video including M frames of images, where M is a positive integer greater than 1; forming a plurality of sub-videos from all the sub-images obtained, each sub-video including M frames of sub-images, the duration of one frame being the same across the plurality of sub-videos, wherein the i-th frame sub-images of all the sub-videos tile into the i-th frame image of the initial video, and, for any sub-video, the relative position of the i-th frame sub-image within the i-th frame image of the initial video is the same as the relative position of the other frame sub-images within the corresponding frame images of the initial video, i being a variable and a natural number taking the values 1 to M in turn. The present disclosure further provides a cutting task distribution method, a computer-readable storage medium, a scheduling server, an execution server, and a video processing system.

Description

Dividing Method, Distribution Method, Medium, Server, System — Technical Field
The present disclosure relates to the field of display technology, and in particular to a video processing method, a cutting task distribution method, a computer-readable storage medium, an execution server, a scheduling server, and a video processing system.
Background
In some situations that call for large-screen presentation, a splicing screen is used. The splicing screen includes a plurality of display terminals; to display a single picture, different display terminals in the same splicing screen need to display different parts of that picture.
Summary
An object of the present disclosure is to provide a video processing method, a cutting task distribution method, a computer-readable storage medium, an execution server, a scheduling server, and a video processing system.
As a first aspect of the present disclosure, a video processing method is provided, including:
dividing each frame of image of an initial video into a plurality of sub-images, the initial video including M frames of images, where M is a positive integer greater than 1;
forming a plurality of sub-videos from all the sub-images obtained, each of the sub-videos including M frames of sub-images, and the duration of one frame being the same across the plurality of sub-videos, wherein the i-th frame sub-images of all the sub-videos tile into the i-th frame image of the initial video, where i is a natural number greater than or equal to 1.
Optionally, the step of dividing each frame of image of the initial video into a plurality of sub-images includes:
receiving a division request, the division request including division information for each frame of image;
dividing each frame of image of the initial video according to the division request.
Optionally, the division information includes the number of sub-images into which each frame of image is divided and layout information of the plurality of sub-images into which one frame of image is divided, and the step of dividing each frame of image of the initial video according to the division request includes:
determining the size of each sub-image according to the division information;
dividing each frame of image of the initial video according to the size of each sub-image and the layout information of the plurality of sub-images into which one frame of image is divided.
Optionally, the step of dividing each frame of image of the initial video according to the size of the sub-images and the layout information of each sub-image includes:
determining, according to the size of the sub-images and the layout information of each sub-image, the coordinates of the base point of each sub-image within the corresponding image;
determining, according to the coordinates of the base point of each sub-image within the corresponding image and the size of the sub-images, the information of the pixels belonging to each sub-image, so as to obtain each sub-image.
Optionally, each frame of image of the initial video is a rectangular image, the number of sub-images into which each frame of image is divided is a×b, each sub-image is a rectangular image, and the base point of a sub-image is the top-left vertex of that sub-image.
Optionally, the video processing method further includes, after forming the plurality of sub-videos from all the sub-images obtained:
allocating an address to each of the sub-videos.
Optionally, the video processing method further includes, after forming the plurality of sub-videos from all the sub-images obtained:
determining a mapping relationship between each of the sub-videos and each display terminal that plays it.
Optionally, the video processing method further includes, before the step of dividing each frame of image of the initial video into a plurality of sub-images:
acquiring a cutting task address;
acquiring the initial video according to the cutting task address.
Optionally, the initial video has a target format, and the step of acquiring the initial video according to the task address includes:
acquiring a source video at the task address;
when the format of the source video is inconsistent with the target format, performing format conversion on the source video to obtain the initial video.
Optionally, the method further includes, after the step of forming a plurality of sub-videos from all the sub-images obtained:
determining a play task according to each sub-video and identification information of the plurality of display terminals in the splicing screen;
generating a task list according to the play task;
delivering the task list to the plurality of display terminals of the splicing screen.
Optionally, the step of determining a play task according to each sub-video and the identification information of the plurality of display terminals in the splicing screen includes:
determining the identification information of the display terminals required by the play task;
determining, according to the identification information of the display terminals required by the play task, the master in the play task and the slaves in the play task;
generating the play task according to each of the sub-videos, the identification information of the display terminal serving as the master, and the identification information of the display terminals serving as slaves.
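As an illustrative sketch of the master/slave play-task generation described above (the field names and the convention that the first terminal acts as master are assumptions, not part of the disclosure):

```python
def build_play_task(sub_video_urls, terminal_ids):
    """Pair each sub-video with a display terminal and mark the first
    terminal as master, the rest as slaves (illustrative convention)."""
    if len(sub_video_urls) != len(terminal_ids):
        raise ValueError("one sub-video per display terminal is required")
    master, *slaves = terminal_ids
    return {
        "master": master,
        "slaves": slaves,
        "assignments": dict(zip(terminal_ids, sub_video_urls)),
    }

task = build_play_task(["a.mp4", "b.mp4"], ["term-01", "term-02"])
```

The resulting record carries everything a task list would need: which terminal controls playback and which sub-video each terminal downloads.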
As a second aspect of the present disclosure, a cutting task distribution method is provided, including:
generating at least one cutting task according to a received source video;
distributing the cutting task, according to the status of each server, to a server satisfying a predetermined condition, so that the server receiving the cutting task executes the above video processing method provided in the present disclosure.
Optionally, the predetermined condition is:
being a server, among the plurality of servers, whose number of tasks being executed does not exceed a predetermined number.
Optionally, the cutting task distribution method further includes, between the step of generating at least one cutting task according to the received source video and the step of distributing the cutting task according to the status of each server to a server satisfying the predetermined condition:
sorting the N servers by the number of tasks each server is executing, from fewest to most;
and the step of distributing the cutting task according to the status of each server to a server satisfying the predetermined condition includes:
sending the generated cutting tasks in turn to the servers ranked in the top L positions, where L equals the number of cutting tasks generated, and L<N.
Optionally, the cutting task distribution method further includes:
storing the mapping relationship between the cutting task and the server executing it.
As a third aspect of the present disclosure, a computer-readable storage medium is provided, the computer-readable storage medium storing an executable program which, when called, can implement one of the following methods:
the above video processing method provided in the present disclosure;
the above cutting task distribution method provided in the present disclosure.
As a fourth aspect of the present disclosure, an execution server is provided, including:
a first storage module on which a first executable program is stored;
one or more first processors, which call the first executable program to implement the video processing method;
a first I/O interface, connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
As a fifth aspect of the present disclosure, a scheduling server is provided, including:
a second storage module on which a second executable program is stored;
one or more second processors, which call the second executable program to implement the cutting task distribution method;
a second I/O interface, connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
As a sixth aspect of the present disclosure, a video processing system is provided, including the execution server and the scheduling server.
Optionally, the video processing system further includes a splicing screen, the splicing screen including a plurality of display terminals used to respectively display each of the sub-videos.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute part of the specification; together with the detailed description below they serve to explain the present disclosure, but do not limit it. In the drawings:
FIG. 1 is a schematic diagram of an implementation of the video processing method provided in the first aspect of the present disclosure;
FIG. 2a is a schematic diagram of the first frame image of an initial video;
FIG. 2b is a schematic diagram of the second frame image of the initial video;
FIG. 3a is a schematic diagram of the first frame image of the initial video divided into four sub-images;
FIG. 3b is a schematic diagram of the second frame image of the initial video divided into four sub-images;
FIG. 4 is a schematic flowchart of an implementation of step S110;
FIG. 5 is a schematic diagram of the array displayed on the user side;
FIG. 6 is a schematic flowchart of an implementation of step S112;
FIG. 7 is a schematic flowchart of an implementation of step S112b;
FIG. 8 is a schematic flowchart of another implementation of step S112;
FIG. 9 is a schematic diagram of different display terminals in a splicing screen displaying two different initial videos;
FIG. 10 is a schematic diagram of another implementation of the video processing method provided in the present disclosure;
FIG. 11 is a schematic flowchart of step S105;
FIG. 12 is a schematic flowchart of step S105b;
FIG. 13 is a schematic diagram of yet another implementation of the video processing method provided in the present disclosure;
FIG. 14 is a schematic flowchart of an implementation of step S150;
FIG. 15 is a flowchart of an implementation of the cutting task distribution method provided in the present disclosure;
FIG. 16 is a flowchart of another implementation of the cutting task distribution method provided in the present disclosure;
FIG. 17 is a flowchart of yet another implementation of the cutting task distribution method provided in the present disclosure;
FIG. 18 is a flowchart of a specific implementation of the cutting task distribution method provided in the present disclosure;
FIG. 19 is a flowchart of an execution server provided in the present disclosure executing a cutting task;
FIG. 20 is a module schematic diagram of the video processing system provided in the present disclosure.
Detailed Description
Specific implementations of the present disclosure are described in detail below in conjunction with the accompanying drawings. It should be understood that the specific implementations described here are only intended to illustrate and explain the present disclosure, not to limit it.
As one aspect of the present disclosure, a video processing method applied to splicing-screen display is provided. As shown in FIG. 1, the video processing method includes:
in step S110, dividing each frame of image of an initial video into a plurality of sub-images, the initial video including M frames of images, where M is a positive integer greater than 1;
in step S120, forming a plurality of sub-videos from all the sub-images obtained, each sub-video including M frames of sub-images, and the duration of one frame being the same across the plurality of sub-videos, wherein the i-th frame sub-images of all the sub-videos tile into the i-th frame image of the initial video, and, for any sub-video, the relative position of the i-th frame sub-image within the i-th frame image of the initial video is the same as the relative position of the other frame sub-images within the corresponding frame images of the initial video; i is a variable and a natural number, taking the values 1 to M in turn.
"i taking the values 1 to M in turn" means that i is, in sequence, 1, 2, 3, 4, 5, …, M.
The following example illustrates what is meant by "for any sub-video, the relative position of the i-th frame sub-image within the i-th frame image of the initial video is the same as the relative position of the other frame sub-images within the corresponding frame images of the initial video":
Each frame of image of the initial video is a rectangular image, and each frame is divided into four sub-images in two rows and two columns. The sub-images obtained by this division can form four sub-videos; for ease of description, these are called the first sub-video, the second sub-video, the third sub-video, and the fourth sub-video. For the first sub-video: its first frame sub-image is the sub-image in the first row, first column of the four sub-images into which the first frame image of the initial video was divided; its second frame sub-image is the sub-image in the first row, first column of the four sub-images into which the second frame image was divided; and so on.
The plurality of sub-videos obtained in step S120 of the present disclosure are delivered to the display terminals of the splicing screen; each display terminal displays one sub-video, so that the splicing screen as a whole displays the initial video.
In the present disclosure, the i-th frame sub-image of each sub-video is part of the i-th frame image of the initial video, and the time axis of each sub-video is the same as that of the initial video; therefore, when the display terminals of the splicing screen display the sub-videos, visually this amounts to playing the initial video.
Here, "splicing screen" refers to a group of display terminals spliced together; the display terminals in a splicing screen can be used to display a single picture.
The video processing method provided in the present disclosure is explained below in conjunction with FIGS. 2a, 2b, 3a, and 3b. The initial video includes M frames of images.
FIG. 2a is a schematic diagram of the first frame image of the initial video, and FIG. 2b is a schematic diagram of its second frame image.
In step S110, the first frame image of the initial video is divided into four sub-images as shown in FIG. 3a, the second frame image is divided into four sub-images as shown in FIG. 3b, and so on, until all M frames of the initial video have been divided into four sub-images each.
In step S120, the sub-images obtained from the frames form four sub-videos, whose time axes are identical to one another and to that of the initial video.
"The duration of one frame is the same across the plurality of sub-videos" means that when one frame of one sub-video lasts t ms, one frame of every other sub-video also lasts t ms, where t>0.
One frame of the initial video lasts T ms. As an optional implementation, t=T. Of course, the present disclosure is not limited to this; the specific value of t can be set according to playback requirements, as long as the frame duration is the same in all sub-videos. Thus, when the sub-videos are played simultaneously, their frames remain synchronized, and the initial video is displayed by way of the splicing screen.
The present disclosure places no specific limit on the number of sub-videos into which the initial video is divided. As an optional implementation, the number can be determined by the number of display terminals in the splicing screen. For example, when the splicing screen includes four display terminals, the initial video is divided into four sub-videos; that is, each frame of image of the initial video is divided into four sub-images.
Of course, the present disclosure is not limited to this. The video processing method provided in the present disclosure can be executed by a server deployed in the cloud; when a user wants to display a video on a splicing screen, a division request can be generated according to the actual configuration of the splicing screen and uploaded to the cloud server.
The division request may include division information corresponding to the manner in which each frame of image of the initial video is to be divided (for example, the number of sub-images each frame is divided into, the shape of each sub-image, the size of each sub-image, and so on).
Accordingly, as shown in FIG. 4, step S110 may include:
in step S111, receiving a division request, the division request including division information for each frame of image;
in step S112, dividing each frame of image of the initial video according to the division request.
The present disclosure places no particular limit on who sends the division request in step S111. The division request may be sent (that is, uploaded) via the Internet by the administrator of the splicing screen to the server executing the video processing method.
As an optional implementation, the per-frame division information in the division request may include the number of sub-images into which each frame of image is divided and layout information of the plurality of sub-images into which one frame of image is divided. Accordingly, as shown in FIG. 6, step S112 may include:
in step S112a, determining the size of each sub-image according to the division information;
in step S112b, dividing each frame of image of the initial video according to the size of the sub-images and the layout information of each sub-image.
In the present disclosure, the size of each frame of image of the initial video is known; from the number of sub-images per frame, the size of each sub-image can be determined.
The present disclosure places no particular limit on how each frame of image of the initial video is divided according to the sizes of the sub-images. As an optional implementation, as shown in FIG. 7, step S112b may include:
in step S112b1, determining, according to the size of the sub-images and the layout information of each sub-image, the coordinates of the base point of each sub-image within the corresponding image;
in step S112b2, determining, according to the coordinates of the base point of each sub-image within the corresponding image and the size of the sub-images, the information of the pixels belonging to each sub-image, so as to obtain each sub-image.
In the present disclosure, the base point may be the first point displayed when the sub-image is displayed. For example, the base point of a sub-image may be its top-left vertex; by determining the coordinates of the base point within the corresponding image and using the size of the sub-image, the information of the pixels belonging to each sub-image can be determined (this information may include the position of the pixel within the image and the gray level of the pixel). After step S112b2, the information of each sub-image can be output.
In the present disclosure, the shape of each frame sub-image of the sub-videos into which the initial video is divided may be determined by the outline of each display terminal in the splicing screen.
In the implementation shown in FIG. 9, the splicing screen is rectangular and includes rectangular display terminals arranged in 2 rows and 4 columns; the sub-images may then also be rectangular. Accordingly, the base point of a sub-image is its top-left vertex.
In the video processing method provided in the present disclosure, each frame of image of the initial video is a rectangular image, the number of sub-images into which each frame is divided is a×b, each sub-image is a rectangular image, and the base point of a sub-image is its top-left vertex, where a and b are both positive integers.
As an optional implementation, after the division request is uploaded, the user side may display a thumbnail of the division scheme.
To make it easy for a user terminal to generate the division request, an array diagram can be displayed on the user terminal; by having the operator select the number of rows and columns, the division request can be generated.
As shown in FIG. 5, the user-side terminal can display an array diagram; by selecting with the mouse, the operator specifies the number of rows and columns into which each frame of image is to be divided, and the layout information of each sub-image can be clearly determined from the array diagram. Layout information here refers to the relative coordinate information of the plurality of sub-images into which one image is divided, together with the correspondence between each sub-image's relative coordinates in the layout and its position coordinates in the corresponding image.
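A minimal sketch of step S112b2, collecting the pixels that belong to one sub-image from the base point coordinates and sub-image size (pure Python, with a frame represented as a nested list of pixel values; the function name and the representation are illustrative assumptions):

```python
def crop_sub_image(frame, base_x, base_y, width, height):
    """Collect the pixels belonging to one sub-image, given the base point
    (top-left corner) coordinates and the sub-image size."""
    return [row[base_x:base_x + width]
            for row in frame[base_y:base_y + height]]

# a 4x4 "frame" whose pixel values encode their own (x, y) coordinates
frame = [[(x, y) for x in range(4)] for y in range(4)]
block = crop_sub_image(frame, 2, 0, 2, 2)  # top-right 2x2 sub-image
```

In practice each pixel value would be a gray level or color tuple rather than its coordinates; coordinates are used here only so the crop is easy to verify by eye.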
Accordingly, as shown in FIG. 8, step S112 includes:
in step S112d, determining, according to the layout information, the correspondence between each sub-image's relative coordinates in the layout and its position coordinates in the corresponding image;
in step S112e, converting each sub-image's relative coordinates in the layout into its position coordinates in the corresponding image;
in step S112f, dividing the image according to the position coordinates of each sub-image in the corresponding image.
For example, in the implementation shown in FIG. 9, the user side submits a 2×4 division request. In the 2×4 layout, A1 to A4 and B1 to B4 represent eight display terminals of the same size; the unfilled portion displays a first initial video, which is cut into 2×3 sub-videos, and the hatched portion displays a second initial video, which is cut into 2×1 sub-videos.
When the layout information is recorded, each smallest unit (that is, each sub-video) is treated as a coordinate point: the coordinates of the sub-video corresponding to display terminal A1 are (0,0), and those of the sub-video corresponding to display terminal B4 are (1,3). Through these coordinate records, the division scheme can be mapped one-to-one onto the initial video and uploaded to the server executing the video processing method. After receiving the initial video and the layout information, the server executes steps S112d to S112f.
It should be pointed out that the "coordinates of a sub-video" here are in fact the identity information of that sub-video, indicating its relative position within the initial video.
With this video processing method, videos can be divided for different splicing screens, achieving different display purposes.
For example, suppose the display purpose is to display a video on a splicing screen whose display terminals are arranged in 2 rows and 3 columns. Accordingly, the information carried in the division request may include dividing the initial video into 2×3 sub-videos, each frame of image of the initial video being a rectangular image.
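The conversion performed in steps S112d and S112e — from a sub-video's relative layout coordinates to a pixel position in the frame — can be sketched as follows (a hedged illustration: the even tiling, frame size, and function name are assumptions):

```python
def layout_to_position(row, col, frame_w, frame_h, rows, cols):
    """Convert a sub-video's relative layout coordinates (row, col) into
    the pixel position of its top-left corner in the frame, assuming an
    even rows x cols tiling of the frame."""
    return (col * (frame_w // cols), row * (frame_h // rows))

# terminal A1 -> layout (0, 0); terminal B4 -> layout (1, 3), as in FIG. 9
a1 = layout_to_position(0, 0, 1920, 1080, 2, 4)
b4 = layout_to_position(1, 3, 1920, 1080, 2, 4)
```

This mirrors the identity-coordinate idea above: (0,0) and (1,3) are identifiers first, and only become pixel offsets once the frame dimensions are known.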
In the present disclosure, after the plurality of sub-videos are generated, they need to be delivered to the display terminals of the splicing screen. To make it easy for the splicing screen to download the corresponding sub-videos, optionally, as shown in FIG. 10, the video processing method further includes, after step S120:
in step S130, allocating an address to each of the sub-videos.
After addresses are allocated to the sub-videos, the splicing screen can download each sub-video according to its address.
So that the splicing screen can accurately reproduce the initial video, the display terminal corresponding to each sub-video should be unambiguous. As one optional implementation, the sub-videos can first be downloaded to a local storage device and then distributed to the corresponding display terminals. As another optional implementation, each sub-video can be downloaded directly to its corresponding display terminal.
The present disclosure places no particular limit on how the correspondence between sub-videos and display terminals is determined. For example, after downloading the sub-videos to local storage, each sub-video can be previewed, and the correspondence between sub-videos and display terminals determined from the preview results.
To deliver the sub-videos to the display terminals quickly, optionally, as shown in FIG. 10, the video processing method may further include, after step S120:
in step S140, determining the mapping relationship between each sub-video and the display terminal that plays it.
The present disclosure places no particular limit on the order of steps S130 and S140: step S130 may be executed before step S140, step S140 before step S130, or the two may be executed simultaneously.
In the present disclosure, the initial video may be a video resource stored locally on the server executing the video processing method. The division request uploaded by the user includes identification information of the initial video to be divided (for example, a video number); after the division request is received, the initial video is first determined, and then step S110 is executed.
Of course, the initial video may also be a video resource stored elsewhere. Accordingly, as shown in FIG. 10, the video processing method may further include, before step S110:
in step S100, acquiring a cutting task address;
in step S105, acquiring the initial video according to the cutting task address.
Different electronic devices support only one or a few video formats; to enable an electronic device to process videos of different formats, the received video resource needs to be transcoded. Accordingly, when the server supports cutting video resources of a target format and the initial video has the target format, as shown in FIG. 11, step S105 may include:
in step S105a, acquiring a source video at the task address;
in step S105b, when the format of the source video is inconsistent with the target format, performing format conversion on the source video to obtain the initial video.
Typically, video formats include mp4, avi, wmv, rmvb, and so on. When the target format is mp4 and the source video is in a non-mp4 format, the source video can be transcoded into mp4.
As an implementation of the present disclosure, after transcoding is completed and the initial video is obtained, the initial video can be stored at the cutting task address.
Specifically, as shown in FIG. 12, step S105b may further include:
in step S105b1, storing the source video locally;
in step S105b2, generating a transcoding task;
in step S105b3, transcoding the source video with the FFMPEG program to output a video in mp4 format;
in step S105b4, outputting the transcoding progress;
in step S105b5, recording the address of the transcoded file in a database;
whereupon the transcoding task is complete.
As described above, the sub-videos obtained by cutting the initial video need to be delivered to the display terminals of the splicing screen. The present disclosure places no particular limit on which display terminal of the splicing screen displays which sub-video. As described above, a mapping relationship between sub-videos and display terminals can be established, and each display terminal displays the sub-video mapped to it.
Of course, the present disclosure is not limited to this. Optionally, as shown in FIG. 13, the video processing method may further include, after step S120:
in step S150, determining a play task according to each sub-video;
in step S160, generating a task list according to the play task;
in step S170, delivering the task list to the plurality of display terminals of the splicing screen.
After receiving the task list, a display terminal can display the sub-video specified in it.
Further optionally, as shown in FIG. 14, step S150 may include:
in step S151, determining the identification information of the display terminals required by the play task;
in step S152, determining, according to the identification information of the display terminals required by the play task, the master in the play task and the slaves in the play task;
in step S153, generating the play task according to each of the sub-videos, the identification information of the display terminal serving as the master, and the identification information of the display terminals serving as slaves.
In the present disclosure, the display terminal serving as the master can control the display terminals serving as slaves to display the corresponding play task.
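The transcoding step (S105b3) might be driven by an ffmpeg invocation along the following lines. This is a sketch under assumptions (file names and codec defaults are illustrative, and the disclosure does not specify the exact command); it only assembles the argument list rather than running ffmpeg:

```python
def build_transcode_cmd(src_path, dst_path):
    """Assemble an ffmpeg invocation that converts a source video to the
    target container implied by dst_path (here mp4).
    '-y' overwrites any existing output file without prompting."""
    return ["ffmpeg", "-y", "-i", src_path, dst_path]

cmd = build_transcode_cmd("source.wmv", "initial.mp4")
# to actually run it: subprocess.run(cmd, check=True)
```

Keeping the command as a list (rather than a shell string) avoids quoting problems when file paths contain spaces.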
As a second aspect of the present disclosure, a cutting task distribution method is provided. As shown in FIG. 15, the cutting task distribution method includes:
in step S210, generating at least one cutting task according to a received source video;
in step S220, distributing the cutting task, according to the status of each server, to a server satisfying a predetermined condition, so that the server receiving the cutting task executes the above video processing method provided in the present disclosure.
In the present disclosure, each of a plurality of distributed servers deployed in the cloud can execute the video processing method provided in the first aspect of the present disclosure. When a cutting task is received, the status of each server capable of executing the video processing method can first be determined (the status including the number of tasks the server is currently executing).
The present disclosure places no particular limit on the predetermined condition. As one optional implementation, the predetermined condition is:
being a server, among the plurality of servers, whose number of tasks being executed does not exceed a predetermined number.
The predetermined number can be determined according to the processing capability of each server. For example, the predetermined number may be 2.
Of course, the present disclosure is not limited to this. As another optional implementation, as shown in FIG. 16, the cutting task distribution method further includes, between step S210 and step S220:
in step S215, sorting the N servers by the number of tasks each server is executing, from fewest to most.
Accordingly, the predetermined condition includes: being ranked in the top L positions among the N servers, where L and N are both positive integers and L<N.
When N>2, as an optional implementation, L may be less than N/2.
For ease of monitoring, optionally, as shown in FIG. 17, the cutting task distribution method further includes:
in step S230, storing the mapping relationship between the cutting task and the server executing it.
A specific implementation of the cutting task distribution method provided in the present disclosure is described in detail below in conjunction with FIG. 18.
In step S210, a cutting task is generated according to the received source video;
step S215 is concretely executed as: acquiring the configuration information of each server capable of executing cutting tasks, the IP address of each server, and the number of tasks each execution server is currently processing, and sorting the servers by the number of tasks they are executing, from fewest to most;
step S220 is concretely executed as: preferentially assigning the 2×3 cutting task to the servers executing fewer tasks;
step S230 is concretely executed as: storing the data of the cutting task (in the present disclosure, the task of dividing the initial video into 6 sub-videos may be stored as a single task or as multiple tasks) together with the IP address of the server executing that task data, in the database in the form of a task data table.
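Steps S215 and S220 amount to a sort followed by a prefix selection. A minimal sketch (the server records and field names are illustrative assumptions, not a disclosed data model):

```python
def pick_servers(servers, num_tasks):
    """Sort servers by the number of tasks they are executing (fewest
    first) and return the IPs of the top-L, where L is the number of
    cutting tasks to distribute."""
    ranked = sorted(servers, key=lambda s: s["running_tasks"])
    return [s["ip"] for s in ranked[:num_tasks]]

servers = [
    {"ip": "10.0.0.1", "running_tasks": 3},
    {"ip": "10.0.0.2", "running_tasks": 0},
    {"ip": "10.0.0.3", "running_tasks": 1},
]
targets = pick_servers(servers, 2)  # L=2 cutting tasks, N=3 servers
```

Because `sorted` is stable, servers with equal load keep their original order, so the selection is deterministic.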
As a third aspect of the present disclosure, a computer-readable storage medium is provided, the computer-readable storage medium storing an executable program which, when called, can implement one of the following methods:
the above video processing method provided in the present disclosure;
the above cutting task distribution method provided in the present disclosure.
Those of ordinary skill in the art will appreciate that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and devices, may be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all physical components may be implemented as software executed by a processor such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
As a fourth aspect of the present disclosure, an execution server is provided, including:
a first storage module on which a first executable program is stored;
one or more first processors, which call the first executable program to implement the video processing method provided in the first aspect of the present disclosure;
a first I/O interface, connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
The first processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the first storage module is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
The first I/O interface is connected between the first processor and the first storage module and enables information interaction between them; it includes but is not limited to a data bus (Bus).
In some embodiments, the first processor, the first storage module, and the first I/O interface are interconnected by a bus and in turn connected with other components of the display terminal.
As a fifth aspect of the present disclosure, a scheduling server is provided, including:
a second storage module on which a second executable program is stored;
one or more second processors, which call the second executable program to implement the above cutting task distribution method provided in the present disclosure;
a second I/O interface, connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
The second processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the second storage module is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
The second I/O interface is connected between the second processor and the second storage module and enables information interaction between them; it includes but is not limited to a data bus (Bus).
In some embodiments, the second processor, the second storage module, and the second I/O interface are interconnected by a bus and in turn connected with other components of the display terminal.
The scheduling server 200 is used to allocate cutting tasks to each execution server. An implementation of the specific process by which the execution server 100 executes a cutting task assigned by the scheduling server 200 is described in detail below in conjunction with FIG. 19:
the execution server queries the task data table generated by the scheduling server every 2 seconds;
it acquires the cutting tasks assigned to its own IP address;
it changes the task status of the corresponding cutting task in the task data table to "processing";
it begins processing the cutting task;
it loads the initial video to be cut using the ffmpeg software, including: calculating the width and height of each frame of sub-image in each sub-video; after determining that each frame of image in the initial video is divided into 2×3 sub-images, calculating the top-left corner coordinates of each sub-image; cropping out, according to the top-left coordinates of each sub-image, the pixel data of the sub-picture matching the above width and height; and outputting the pixel data as the sub-image;
it reassembles the sub-images cut from each frame of image to obtain six sub-videos as video-format files;
it updates the file addresses of the cut sub-videos into the task data table;
and the cutting is complete.
Optionally, the video processing system further includes a splicing screen 300, the splicing screen including a plurality of display terminals used to respectively display each of the sub-videos.
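A hedged sketch of the cutting step above, expressed as ffmpeg crop-filter descriptions for a 2×3 split: the `crop=w:h:x:y` filter syntax is standard ffmpeg, while the frame size is an assumption chosen for the example. Only the filter strings are built here, one per sub-video:

```python
def crop_filters(frame_w, frame_h, rows, cols):
    """Return one ffmpeg crop-filter string per sub-video, row-major,
    in the standard crop=w:h:x:y form (width, height, x offset, y offset)."""
    w, h = frame_w // cols, frame_h // rows
    return [f"crop={w}:{h}:{c * w}:{r * h}"
            for r in range(rows) for c in range(cols)]

filters = crop_filters(1920, 1080, 2, 3)  # six sub-videos for a 2x3 split
```

Each string could then be passed to ffmpeg via `-filter:v`, producing one sub-video per invocation.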

Claims (21)

  1. A video processing method applied to splicing-screen display, comprising:
    dividing each frame of image of an initial video into a plurality of sub-images, the initial video comprising M frames of images, wherein M is a positive integer greater than 1;
    forming a plurality of sub-videos from all the sub-images obtained, each of the sub-videos comprising M frames of sub-images, wherein the duration of one frame is the same across the plurality of sub-videos, the i-th frame sub-images of all the sub-videos tile into the i-th frame image of the initial video, and, for any sub-video, the relative position of the i-th frame sub-image within the i-th frame image of the initial video is the same as the relative position of the other frame sub-images within the corresponding frame images of the initial video, i being a variable and a natural number taking the values 1 to M in turn.
  2. The video processing method according to claim 1, wherein the step of dividing each frame of image of the initial video into a plurality of sub-images comprises:
    receiving a division request, wherein the division request comprises division information for each frame of image;
    dividing each frame of image of the initial video according to the division request.
  3. The video processing method according to claim 2, wherein the division information comprises the number of sub-images into which each frame of the image is divided and layout information of the plurality of sub-images into which one frame of image is divided, and the step of dividing each frame of image of the initial video according to the division request comprises:
    determining the size of each of the sub-images according to the division information;
    dividing each frame of image of the initial video according to the size of each of the sub-images and the layout information of each of the sub-images.
  4. The video processing method according to claim 3, wherein the step of dividing each frame of image of the initial video according to the size of the sub-images and the layout information of each of the sub-images comprises:
    determining, according to the size of the sub-images and the layout information of each of the sub-images, the coordinates of the base point of each of the sub-images within the corresponding image;
    determining, according to the coordinates of the base point of each of the sub-images within the corresponding image and the size of the sub-images, the information of the pixels belonging to each of the sub-images, so as to obtain each of the sub-images.
  5. The video processing method according to claim 4, wherein each frame of image of the initial video is a rectangular image, each sub-image is a rectangular image, and the base point of a sub-image is the top-left vertex of the sub-image.
  6. The video processing method according to claim 5, wherein the division information specifies that each frame of the image is divided into sub-images in a rows and b columns, wherein a and b are both positive integers.
  7. The video processing method according to any one of claims 1 to 6, further comprising, after forming the plurality of sub-videos from all the sub-images obtained:
    allocating an address to each of the sub-videos.
  8. The video processing method according to claim 7, further comprising, after forming the plurality of sub-videos from all the sub-images obtained:
    determining a mapping relationship between each of the sub-videos and each display terminal that plays it.
  9. The video processing method according to any one of claims 1 to 6, further comprising, before the step of dividing each frame of image of the initial video into a plurality of sub-images:
    acquiring a cutting task address;
    acquiring the initial video according to the cutting task address.
  10. The video processing method according to claim 9, wherein the initial video has a target format, and the step of acquiring the initial video according to the task address comprises:
    acquiring a source video at the task address;
    when the format of the source video is inconsistent with the target format, performing format conversion on the source video to obtain the initial video.
  11. The video processing method according to any one of claims 1 to 6, further comprising, after the step of forming a plurality of sub-videos from all the sub-images obtained:
    determining a play task according to each sub-video;
    generating a task list according to the play task;
    delivering the task list to the plurality of display terminals of the splicing screen.
  12. The video processing method according to claim 11, wherein the step of determining a play task according to each sub-video and identification information of the plurality of display terminals in the splicing screen comprises:
    determining the identification information of the display terminals required by the play task;
    determining, according to the identification information of the display terminals required by the play task, the master in the play task and the slaves in the play task;
    generating the play task according to each of the sub-videos, the identification information of the display terminal serving as the master, and the identification information of the display terminals serving as slaves.
  13. A cutting task distribution method, comprising:
    generating at least one cutting task according to a received source video;
    distributing the cutting task, according to the status of each server, to a server satisfying a predetermined condition, so that the server receiving the cutting task executes the video processing method according to any one of claims 1 to 12.
  14. The cutting task distribution method according to claim 13, wherein the predetermined condition is:
    being a server, among the plurality of servers, whose number of tasks being executed does not exceed a predetermined number.
  15. The cutting task distribution method according to claim 13, further comprising, between the step of generating at least one cutting task according to the received source video and the step of distributing the cutting task according to the status of each server to a server satisfying the predetermined condition:
    sorting the N servers by the number of tasks each server is executing, from fewest to most;
    wherein the step of distributing the cutting task according to the status of each server to a server satisfying the predetermined condition comprises:
    sending the generated cutting tasks in turn to the servers ranked in the top L positions, wherein L equals the number of cutting tasks generated, and L<N.
  16. The cutting task distribution method according to any one of claims 13 to 15, further comprising:
    storing the mapping relationship between the cutting task and the server executing it.
  17. A computer-readable storage medium storing an executable program which, when called, can implement one of the following methods:
    the video processing method according to any one of claims 1 to 12;
    the cutting task distribution method according to any one of claims 13 to 16.
  18. An execution server, comprising:
    a first storage module on which a first executable program is stored;
    one or more first processors, which call the first executable program to implement the video processing method according to any one of claims 1 to 12;
    a first I/O interface, connected between the first processor and the first storage module to implement information interaction between the first processor and the first storage module.
  19. A scheduling server, comprising:
    a second storage module on which a second executable program is stored;
    one or more second processors, which call the second executable program to implement the cutting task distribution method according to any one of claims 13 to 16;
    a second I/O interface, connected between the second processor and the second storage module to implement information interaction between the second processor and the second storage module.
  20. A video processing system, comprising the execution server according to claim 18 and the scheduling server according to claim 19.
  21. The video processing system according to claim 20, wherein the video processing system further comprises a splicing screen, the splicing screen comprising a plurality of display terminals used to respectively display each of the sub-videos.
PCT/CN2020/093395 2020-05-29 2020-05-29 Dividing method, distribution method, medium, server, system WO2021237708A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202080000852.4A CN114072760B (zh) 2020-05-29 2020-05-29 Dividing method, distribution method, medium, server, system
US17/309,612 US11995371B2 (en) 2020-05-29 2020-05-29 Dividing method, distribution method, medium, server, system
PCT/CN2020/093395 WO2021237708A1 (zh) 2020-05-29 2020-05-29 Dividing method, distribution method, medium, server, system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093395 WO2021237708A1 (zh) 2020-05-29 2020-05-29 Dividing method, distribution method, medium, server, system

Publications (1)

Publication Number Publication Date
WO2021237708A1 true WO2021237708A1 (zh) 2021-12-02

Family

ID=78745467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093395 WO2021237708A1 (zh) 2020-05-29 2020-05-29 Dividing method, distribution method, medium, server, system

Country Status (3)

Country Link
US (1) US11995371B2 (zh)
CN (1) CN114072760B (zh)
WO (1) WO2021237708A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116437028B (zh) * 2023-06-14 2023-09-08 深圳市视景达科技有限公司 一种视频显示方法及***
CN117173161B (zh) * 2023-10-30 2024-02-23 杭州海康威视数字技术股份有限公司 内容安全检测方法、装置、设备及***
CN117931458B (zh) * 2024-03-21 2024-06-25 北京壁仞科技开发有限公司 一种推理服务调度方法、装置、处理器及芯片

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130122960A1 (en) * 2011-11-16 2013-05-16 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN104657101A (zh) * 2015-02-12 2015-05-27 武汉新蜂乐众网络技术有限公司 一种图像拼接显示方法及***
CN105739935A (zh) * 2016-01-22 2016-07-06 厦门美图移动科技有限公司 一种多终端联合显示方法、装置及***
CN108093205A (zh) * 2016-11-23 2018-05-29 杭州海康威视数字技术股份有限公司 一种跨屏同步显示方法及***
CN109213464A (zh) * 2018-09-26 2019-01-15 永州市金蚂蚁新能源机械有限公司 一种图像拼接显示方法及***

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2617119A1 (en) * 2008-01-08 2009-07-08 Pci Geomatics Enterprises Inc. Service oriented architecture for earth observation image processing
CN103838779B (zh) * 2012-11-27 2019-02-05 深圳市腾讯计算机***有限公司 复用空闲计算资源的云转码方法及***、分布式文件装置
CN103606158A (zh) 2013-11-29 2014-02-26 深圳市龙视传媒有限公司 一种视频剪切的预处理方法及终端
US9922394B2 (en) * 2014-12-05 2018-03-20 Samsung Electronics Co., Ltd. Display apparatus and method for displaying split screens thereof
US10607571B2 (en) * 2017-08-14 2020-03-31 Thomas Frederick Utsch Method and system for the distribution of synchronized video to an array of randomly positioned display devices acting as one aggregated display device
CN106373493A (zh) * 2016-09-27 2017-02-01 京东方科技集团股份有限公司 一种拼接屏、拼接屏的驱动方法、装置及显示设备
CN107229676A (zh) 2017-05-02 2017-10-03 国网山东省电力公司 基于大数据的分布式视频切割模型及应用
CN109495697A (zh) * 2017-09-11 2019-03-19 广州彩熠灯光有限公司 基于视频切割的多屏幕扩展方法、***、存储介质及终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130122960A1 (en) * 2011-11-16 2013-05-16 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN104657101A (zh) * 2015-02-12 2015-05-27 武汉新蜂乐众网络技术有限公司 一种图像拼接显示方法及***
CN105739935A (zh) * 2016-01-22 2016-07-06 厦门美图移动科技有限公司 一种多终端联合显示方法、装置及***
CN108093205A (zh) * 2016-11-23 2018-05-29 杭州海康威视数字技术股份有限公司 一种跨屏同步显示方法及***
CN109213464A (zh) * 2018-09-26 2019-01-15 永州市金蚂蚁新能源机械有限公司 一种图像拼接显示方法及***

Also Published As

Publication number Publication date
US11995371B2 (en) 2024-05-28
US20220308821A1 (en) 2022-09-29
CN114072760B (zh) 2024-06-25
CN114072760A (zh) 2022-02-18

Similar Documents

Publication Publication Date Title
WO2021237708A1 (zh) 切割方法、分配方法、介质、服务器、***
RU2639651C2 (ru) Идентификация изображения и организация согласно макету без вмешательства пользователя
CN108924582B (zh) 视频录制方法、计算机可读存储介质及录播***
CN106572139B (zh) 多终端控制方法、终端、服务器和***
CN110149518B (zh) 媒体数据的处理方法、***、装置、设备以及存储介质
CN111107386A (zh) 直播视频的回看方法、装置、电子设备、***及存储介质
US11350151B2 (en) Methods, systems and devices that enable a user of a mobile phone to select what content is displayed on a screen of a consumer electronic device on display
CN112153459A (zh) 用于投屏显示的方法和装置
WO2018120519A1 (zh) 图像处理的方法和装置
CN114816308B (zh) 信息分区显示方法及相关设备
CN109218817B (zh) 一种显示虚拟礼物提示消息的方法和装置
US10467279B2 (en) Selecting digital content for inclusion in media presentations
JP7471510B2 (ja) ピクチャのビデオへの変換の方法、装置、機器および記憶媒体
CN109660852B (zh) 录制视频发布前的视频预览方法、存储介质、设备及***
CN111064700B (zh) 云游戏的下载方法、装置及***
CN112714341B (zh) 信息获取方法、云化机顶盒***、实体机顶盒及存储介质
CN107027056B (zh) 一种桌面配置方法、服务器及客户端
EP4089533A2 (en) Pooling user interface engines for cloud ui rendering
JP2024517702A (ja) シングルストリームを利用して関心領域の高画質映像を提供する方法、コンピュータ装置、およびコンピュータプログラム
CN113824988B (zh) 一种适配不同场景的点播方法及终端
CN110337043A (zh) 电视的视频播放方法、装置及存储介质
US11838593B2 (en) Multi-mode selectable media playback
CN113099247B (zh) 虚拟资源处理方法、装置、服务器、存储介质及程序产品
CN107566904A (zh) 一种资源数据更新方法及机顶盒设备
CN115278278B (zh) 一种页面显示方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937581

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937581

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.07.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20937581

Country of ref document: EP

Kind code of ref document: A1