CN111064930A - Split screen display method, display terminal and storage device - Google Patents


Info

Publication number
CN111064930A
CN111064930A (application CN201911305117.XA; granted as CN111064930B)
Authority
CN
China
Prior art keywords
video data
split
screen
display
screen area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911305117.XA
Other languages
Chinese (zh)
Other versions
CN111064930B (en)
Inventor
林仁华
朱龙
江斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201911305117.XA priority Critical patent/CN111064930B/en
Publication of CN111064930A publication Critical patent/CN111064930A/en
Application granted granted Critical
Publication of CN111064930B publication Critical patent/CN111064930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses a split-screen display method, a display terminal, and a storage device. The split-screen display method comprises: a display terminal acquires video data and weight information of a plurality of camera devices, wherein the display terminal comprises a display interface and the display interface comprises a plurality of split-screen areas; a split-screen area corresponding to the video data of each camera device is determined based on the weight information of the plurality of camera devices; and the video data of the plurality of camera devices are respectively displayed in the corresponding split-screen areas. This scheme can improve video monitoring efficiency.

Description

Split screen display method, display terminal and storage device
Technical Field
The present application relates to the field of information technologies, and in particular, to a split-screen display method, a display terminal, and a storage device.
Background
Thanks to the deployment of camera devices, such as surveillance cameras, in places such as homes, shops, and gardens, together with the rapid development of electronic technology, people increasingly view video data captured by camera devices through display terminals such as mobile terminals and microcomputers.
Meanwhile, in scenarios where camera devices are deployed at multiple points, people generally want to quickly preview the video data captured by the plurality of camera devices and quickly locate the monitoring pictures they care about, so as to increase the viewing speed of the video data and improve monitoring efficiency. In view of this, improving video monitoring efficiency has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a split-screen display method, a display terminal, and a storage device that can improve video monitoring efficiency.
In order to solve the above problem, a first aspect of the present application provides a split-screen display method, including: a display terminal acquires video data and weight information of a plurality of camera devices, wherein the display terminal comprises a display interface and the display interface comprises a plurality of split-screen areas; a split-screen area corresponding to the video data of each camera device is determined based on the weight information of the plurality of camera devices; and the video data of the plurality of camera devices are respectively displayed in the corresponding split-screen areas.
In order to solve the above problem, a second aspect of the present application provides a display terminal, which includes a memory, a processor, and a human-computer interaction circuit, where the memory and the human-computer interaction circuit are coupled to the processor, and when the memory, the processor, and the human-computer interaction circuit are operated, the split-screen display method in the first aspect can be implemented.
In order to solve the above problem, a third aspect of the present application provides a storage device storing program instructions executable by a processor, the program instructions being configured to implement the split-screen display method of the first aspect.
In the above scheme, the display terminal acquires video data and weight information of a plurality of camera devices, and the display terminal comprises a display interface with a plurality of split-screen areas. A split-screen area corresponding to the video data of each camera device is determined based on the weight information of the plurality of camera devices, and the video data of the plurality of camera devices are then displayed in the corresponding split-screen areas. The video data of the plurality of camera devices can thus be displayed simultaneously on the display interface of the display terminal, which increases the viewing speed of the video data. Meanwhile, because the split-screen area corresponding to the video data of each camera device is determined based on the weight information, a user can first view the video data of interest or concern and quickly locate that video data, thereby improving video monitoring efficiency.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of a split-screen display method according to the present application;
FIG. 2 is a block diagram of one embodiment of the display terminal of FIG. 1;
FIG. 3 is a block diagram of another embodiment of the display terminal of FIG. 1;
FIG. 4 is a block diagram of a further embodiment of the display terminal of FIG. 1;
FIG. 5 is a flowchart illustrating an embodiment of adjusting a weight value corresponding to video data of an image capturing device;
FIG. 6 is a flowchart illustrating another embodiment of adjusting weight values corresponding to video data of an image capturing device;
FIG. 7 is a flowchart illustrating another embodiment of adjusting weight values corresponding to video data of an image capture device;
FIG. 8 is a schematic flowchart of another embodiment of a split-screen display method according to the present application;
FIG. 9 is a block diagram of an embodiment of a display terminal of the present application;
FIG. 10 is a block diagram of another embodiment of a display terminal according to the present application;
FIG. 11 is a block diagram of an embodiment of a memory device according to the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between related objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a split-screen display method according to the present application. Specifically, the method may include the steps of:
step S11: the display terminal acquires video data and weight information of the plurality of camera devices, wherein the display terminal comprises a display interface, and the display interface comprises a plurality of split screen areas.
In this embodiment, the plurality of camera devices may be two camera devices, three camera devices, and so on; this embodiment is not particularly limited herein. The plurality of camera devices may include infrared night-vision cameras, ordinary network cameras, and the like; this embodiment is not particularly limited herein.
In this embodiment, the display terminal may include, but is not limited to, a mobile terminal and a microcomputer, where the mobile terminal may include a smart phone, a tablet computer, and the like; this embodiment is not particularly limited herein.
Taking a smart phone as an example, please refer to fig. 2, which is a schematic diagram of a framework of an embodiment of the display terminal in fig. 1. The display terminal shown in fig. 2 is a smart phone. As shown in fig. 2, under the default condition the display areas of the video data are arranged in the display interface as multiple rows in a single column, so the amount of video data a user can browse at one time is limited and video monitoring efficiency is low. In an implementation scenario, to allow the user to actively trigger split-screen display and thereby improve video monitoring efficiency, a split-screen preview instruction input by the user may be received, and the video data and weight information of the plurality of camera devices may then be acquired based on the received split-screen preview instruction. In a specific implementation scenario, to further enable the user to quickly enter the split-screen preview, a split-screen preview button may be provided on the display interface to receive a split-screen preview instruction generated when the user clicks/touches the button. Specifically, the display terminal may run a display application configured to provide the display interface, and the split-screen preview button may be placed on the home page of the display application.
Referring to fig. 3, fig. 3 is a schematic diagram of a framework of another embodiment of the display terminal in fig. 1. The display interface may include a plurality of split-screen areas. As shown in fig. 3, the split-screen areas may be arranged in a 2 × 2 array, or alternatively in a 3 × 3 array; this embodiment is not limited in this respect. In addition, referring to fig. 4, to give the user a larger display size, the split-screen areas may rotate with the display terminal when the user rotates it: for example, when the display terminal detects that it has been rotated to the horizontal orientation shown in fig. 4, the plurality of split-screen areas may be rotated to the horizontal orientation simultaneously, thereby increasing their display size.
In this embodiment, the weight information may include a weight value corresponding to the video data of each camera device. In an implementation scenario, the weight value may be preset according to the importance of the location where the camera device is deployed; specifically, the weight value is positively correlated with the importance degree, that is, the higher the importance, the larger the weight value. In another implementation scenario, the weight value may also be determined according to the click rate of each video data while the user browses the video data; specifically, the weight value is positively correlated with the click rate, that is, the larger the click rate, the higher the weight value.
In one implementation scenario, video data captured by a plurality of image capturing devices is stored in a server, and the display terminal can acquire the video data of the plurality of image capturing devices from the server.
Step S12: based on the weight information of the plurality of image pickup devices, a split screen area corresponding to the video data of each image pickup device is determined.
In one implementation scenario, to enable the user to quickly locate the video data of greatest interest, the weight information may be parsed to obtain the weight value corresponding to each video data, and a split-screen area for displaying each video data may be determined among the plurality of split-screen areas based on that weight value: the larger the weight value corresponding to a video data, the earlier the position of its split-screen area among the plurality of split-screen areas.
In a specific implementation scenario, please refer to fig. 3. The acquired video data of the plurality of camera devices may include video data 01, 02, 03, and 04, with corresponding weight values 10, 12, 9, and 11, respectively. Accordingly, the order of video data 01 to 04 is: video data 02, video data 04, video data 01, video data 03, which are displayed in the upper-left, upper-right, lower-left, and lower-right split-screen areas shown in fig. 3, respectively.
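The weight-based assignment in the example above can be sketched as follows. This is an illustrative sketch, not code from the patent; the area names and video identifiers are hypothetical.

```python
# Sort video feeds by weight (descending) and map them onto the
# split-screen areas in reading order: higher weight -> earlier area.

def assign_split_screens(videos):
    """videos: list of (video_id, weight) pairs.
    Returns video ids ordered from highest to lowest weight."""
    return [vid for vid, _ in sorted(videos, key=lambda v: v[1], reverse=True)]

# 2x2 grid, ordered upper-left, upper-right, lower-left, lower-right.
areas = ["upper-left", "upper-right", "lower-left", "lower-right"]
videos = [("video01", 10), ("video02", 12), ("video03", 9), ("video04", 11)]

order = assign_split_screens(videos)
layout = dict(zip(areas, order))
# video02 (weight 12) lands upper-left, video04 upper-right,
# video01 lower-left, video03 lower-right -- matching the example above.
```

Note that Python's `sorted` is stable, so feeds with equal weights keep their original relative order.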
In another specific implementation scenario, the number of acquired video data may also be greater than the number of split-screen areas in the display interface. Referring to fig. 3, the acquired video data may include video data 01 through 06, with corresponding weight values 10, 12, 9, 11, 7, and 14, respectively. Accordingly, the order of video data 01 to 06 is: video data 06, video data 02, video data 04, video data 01, video data 03, video data 05. Video data 06 can be displayed in the upper-left split-screen area, video data 02 in the upper-right, video data 04 in the lower-left, and video data 01 in the lower-right, as shown in fig. 3. When the user wants to view other video data, the user can slide the display interface; the display terminal then receives a display switching instruction generated by the slide and switches the video data displayed in the split-screen areas based on that instruction. The upper-left split-screen area can thus switch to displaying video data 03, the upper-right to video data 05, and the lower-left and lower-right split-screen areas can remain empty.
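The paging behavior described above — filling the grid page by page and leaving trailing areas empty — can be sketched as follows. This is an assumed, simplified model of the swipe-to-switch behavior; it is not the patent's implementation.

```python
# Split the weight-ordered video ids into pages, one page per screenful.
# The last page may be shorter than the grid, leaving areas empty.

def pages(ordered_videos, areas_per_page):
    return [ordered_videos[i:i + areas_per_page]
            for i in range(0, len(ordered_videos), areas_per_page)]

# Six feeds on a 2x2 grid (4 areas per page), already sorted by weight.
order = ["video06", "video02", "video04", "video01", "video03", "video05"]
all_pages = pages(order, 4)
# First page fills the grid; after a swipe the second page shows only
# video03 and video05, with the remaining two areas empty.
```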
Step S13: and respectively displaying the video data of the plurality of camera devices in the corresponding split screen areas.
After the split screen area corresponding to the video data of each image pickup device is determined based on the weight information of the image pickup devices, the video data of the image pickup devices can be respectively displayed in the split screen areas corresponding thereto.
In the above scheme, the display terminal acquires video data and weight information of a plurality of camera devices, and the display terminal comprises a display interface with a plurality of split-screen areas. A split-screen area corresponding to the video data of each camera device is determined based on the weight information of the plurality of camera devices, and the video data of the plurality of camera devices are then displayed in the corresponding split-screen areas. The video data of the plurality of camera devices can thus be displayed simultaneously on the display interface of the display terminal, which increases the viewing speed of the video data. Meanwhile, because the split-screen area corresponding to the video data of each camera device is determined based on the weight information, a user can first view the video data of interest or concern and quickly locate that video data, thereby improving video monitoring efficiency.
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of adjusting a weight value corresponding to video data of an image capturing device. Specifically, the method may include the steps of:
step S51: and receiving a full screen viewing instruction generated by clicking/touching the split screen area by the user.
While viewing the video data of the camera devices, the user may click/touch a split-screen area so that the video data displayed in that area is shown full screen; the video data displayed in that split-screen area can be regarded as video data the user is paying attention to.
In a specific implementation scenario, when a full-screen viewing instruction generated by clicking/touching a split-screen area by a user is received, video data displayed on the split-screen area clicked/touched by the user may be displayed in a full screen based on the full-screen viewing instruction.
Step S52: and adding a preset weight increment value to a weight value corresponding to the video data of the camera device displayed in the split screen area clicked/touched by the user.
In this embodiment, the preset weight increment value may be 1, and in other implementation scenarios, the preset weight increment value may also be other values, for example: 2. 3, 4, etc., and the present embodiment is not particularly limited herein.
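The increment step above is simple enough to state directly. The sketch below is illustrative; the function and variable names are hypothetical, not from the patent.

```python
# Bump a video's weight by the preset increment each time its
# split-screen area is clicked/touched for full-screen viewing.

PRESET_INCREMENT = 1  # the embodiment also allows 2, 3, 4, etc.

def on_fullscreen_view(weights, video_id, increment=PRESET_INCREMENT):
    weights[video_id] = weights.get(video_id, 0) + increment
    return weights[video_id]

weights = {"video01": 10}
on_fullscreen_view(weights, "video01")  # weight becomes 11
```

On the next split-screen launch, re-sorting by the updated weights moves frequently viewed feeds toward the earlier areas.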
In the above scheme, a full-screen viewing instruction generated when the user clicks/touches a split-screen area is received, and a preset weight increment value is added to the weight value corresponding to the video data of the camera device displayed in that area. The weight value corresponding to the video data can thus be adjusted according to the user's attention while the user views the video data, and the split-screen area of the video data can be readjusted based on the adjusted weight value. The split-screen area used to display the video data therefore follows the user's attention to it, which helps improve video monitoring efficiency and user experience.
Referring to fig. 6, fig. 6 is a flowchart illustrating another embodiment of adjusting a weight value corresponding to video data of an image capturing device. Specifically, the method may include the steps of:
step S61: and receiving a full screen viewing instruction generated by clicking/touching the split screen area by the user.
Please refer to step S51 in the above embodiment.
Step S62: and displaying the video data displayed in the split screen area clicked/touched by the user in a full screen mode based on the full screen viewing instruction.
When a full-screen viewing instruction generated by clicking/touching the split-screen area by the user is received, the video data displayed in the split-screen area clicked/touched by the user can be displayed in a full screen based on the full-screen viewing instruction, so that the user can acquire the detail information contained in the video data.
Step S63: and if the duration of the full-screen display of the video data displayed in the split-screen area clicked/touched by the user exceeds a preset duration threshold, adding a preset weight increment value to the weight value corresponding to the video data of the camera device displayed in the split-screen area clicked/touched by the user.
When the duration of full-screen display of the video data displayed in the split-screen area clicked/touched by the user exceeds a preset duration threshold, this indicates that the user's attention to the video data is high. Accordingly, a preset weight increment value may be added to the weight value corresponding to that video data. The preset weight increment value may be set according to the actual application scenario, for example 1, 2, 3, 4, and so on; this embodiment is not particularly limited herein. In a specific implementation scenario, when the duration of full-screen display exceeds the preset duration threshold, the product of the quotient of the duration divided by the threshold and the preset weight increment value may be calculated, and this product may be added to the weight value corresponding to the video data of the camera device displayed in the split-screen area clicked/touched by the user.
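The duration-weighted variant above amounts to a simple formula: below threshold, no change; above it, the weight grows in proportion to viewing time. A minimal sketch, with illustrative names:

```python
# Weight increase for full-screen viewing, per the duration-weighted
# variant: (duration / threshold) * increment once the threshold is
# exceeded, zero otherwise.

def duration_weight_delta(duration_s, threshold_s, increment):
    if duration_s <= threshold_s:
        return 0
    return (duration_s / threshold_s) * increment

# Watching full screen for 30 s against a 10 s threshold with a
# preset increment of 1 yields a weight increase of 3.0.
delta = duration_weight_delta(30, 10, 1)
```

The proportional form means a feed watched three times longer than the threshold gains three times the base increment, so sustained attention outranks brief taps.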
In the above scheme, a full-screen viewing instruction generated when the user clicks/touches a split-screen area is received, and the video data displayed in that area is shown full screen based on the instruction. If the duration of the full-screen display exceeds a preset duration threshold, a preset weight increment value is added to the weight value corresponding to that video data. The weight value can thus be adjusted according to the user's attention while the user views the video data, and the split-screen area of the video data can be readjusted based on the adjusted weight value, so that the split-screen area used to display the video data follows the user's attention to it, improving video monitoring efficiency and user experience.
Referring to fig. 7, fig. 7 is a flowchart illustrating another embodiment of adjusting a weight value corresponding to video data of an image capturing device. Specifically, the method may include the steps of:
step S71: and receiving a full screen viewing instruction generated by clicking/touching the split screen area by the user.
Please refer to step S51 in the above embodiment.
Step S72: and displaying the video data displayed in the split screen area clicked/touched by the user in a full screen mode based on the full screen viewing instruction.
Please refer to step S62 in the above embodiment.
Step S73: and adding a preset weight increment value to a weight value corresponding to the video data of the camera device displayed in the split screen area clicked/touched by the user.
Please refer to step S52 in the above embodiment.
Step S74: and if the duration of the full-screen display of the video data displayed in the split-screen area clicked/touched by the user exceeds a preset duration threshold, adding a preset weight increment value to the weight value corresponding to the video data of the camera device displayed in the split-screen area clicked/touched by the user.
Please refer to step S63 in the above embodiment.
In the above scheme, while the user views the video data, the weight value corresponding to the video data can be adjusted according to the user's attention, so that the split-screen area of the video data can be readjusted based on the adjusted weight value. The split-screen area used to display the video data therefore follows the user's attention to it, which helps improve video monitoring efficiency and user experience.
Referring to fig. 8, fig. 8 is a schematic flowchart illustrating a split-screen display method according to another embodiment of the present application.
Specifically, the method may include the steps of:
step S81: and receiving a sequencing adjustment instruction generated by dragging the split screen area by the user.
In one implementation scenario, when the user wants to manually adjust the split-screen area in which a video data is displayed, the user may drag the split-screen area; specifically, the gesture may be to long-press the split-screen area for several seconds and then move it to the desired position.
For example, video data captured by a camera device disposed in a doorway may be of significant concern to a user, and the user may drag a split screen area where the video data is displayed.
Step S82: and taking the video data displayed in the split screen area dragged by the user as a target video based on the sequencing adjustment instruction.
In a specific implementation scenario, the order adjustment instruction may include a start coordinate and an end coordinate of the user dragging action, and may use video data displayed in the split screen area where the start coordinate is located as the target video.
Step S83: and re-determining a split screen area for displaying the target video, setting a weight value corresponding to the target video as a preset identifier, and recording the display position of the split screen area for displaying the target video.
In a specific implementation scenario, the sequencing adjustment instruction may include the start coordinate and end coordinate of the user's drag action, and the split-screen area where the end coordinate is located may be used as the split-screen area for displaying the target video. In addition, to memorize the user's operation so that the target video can be viewed at the expected position the next time split-screen display is started, the weight value corresponding to the target video can be set to a preset identifier, and the display position of the split-screen area for the target video can be recorded. When split-screen display is next started, the video data whose weight value is the preset identifier can be screened out from the video data of the plurality of camera devices, and the split-screen areas for displaying the screened video data can be determined based on the recorded display positions. For the remaining video data, split-screen areas can be determined among the remaining split-screen areas based on the corresponding weight values, where the larger the weight value of a remaining video data, the earlier the position of its split-screen area among the remaining split-screen areas.
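The restore-on-next-launch logic described above can be sketched as follows. This is an assumed model: `PINNED` stands in for the "preset identifier", and all names and data are illustrative, not from the patent.

```python
# On the next split-screen launch: videos marked with the preset
# identifier keep their recorded areas; the rest fill the remaining
# areas in descending weight order.

PINNED = "pinned"  # stand-in for the patent's "preset identifier"

def build_layout(areas, videos, pinned_positions):
    """areas: ordered area names; videos: video_id -> weight or PINNED;
    pinned_positions: video_id -> recorded area name."""
    layout = {}
    for vid, area in pinned_positions.items():
        if videos.get(vid) == PINNED:
            layout[area] = vid
    free_areas = [a for a in areas if a not in layout]
    rest = sorted(((v, w) for v, w in videos.items() if w != PINNED),
                  key=lambda vw: vw[1], reverse=True)
    for area, (vid, _) in zip(free_areas, rest):
        layout[area] = vid
    return layout

areas = ["upper-left", "upper-right", "lower-left", "lower-right"]
videos = {"door": PINNED, "yard": 8, "shop": 12, "hall": 5}
layout = build_layout(areas, videos, {"door": "upper-left"})
# "door" stays upper-left as recorded; "shop" (weight 12) takes the
# upper-right, "yard" the lower-left, "hall" the lower-right.
```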
According to the scheme, the split screen area displayed by the target video can be adjusted in a mode that the user manually drags the split screen area, so that the user can quickly position video data shot by the camera device which focuses on, and the video monitoring efficiency is improved.
Referring to fig. 9, fig. 9 is a schematic diagram of a framework of an embodiment of a display terminal 90 of the present application. The display terminal 90 comprises an obtaining module 91, a determining module 92, and a display module 93. The obtaining module 91 is configured to obtain the video data and weight information of a plurality of camera devices; the display module 93 is configured to provide a display interface comprising a plurality of split-screen areas; the determining module 92 is configured to determine the split-screen area corresponding to the video data of each camera device based on the weight information of the plurality of camera devices; and the display module 93 is configured to display the video data of the plurality of camera devices in the corresponding split-screen areas. In one implementation scenario, the display terminal 90 may comprise a mobile terminal or a microcomputer.
In the above scheme, the display terminal 90 acquires video data and weight information of a plurality of camera devices, and the display terminal 90 includes a display interface with a plurality of split-screen areas. The split-screen area corresponding to the video data of each camera device is determined based on the weight information, and the video data of the plurality of camera devices are then displayed in the corresponding split-screen areas. The video data of the plurality of camera devices can thus be displayed simultaneously on the display interface of the display terminal 90, which helps increase the viewing speed of the video data. Meanwhile, because the split-screen area corresponding to the video data of each camera device is determined based on the weight information, a user can first view the video data of interest or concern and quickly locate that video data, thereby improving video monitoring efficiency.
In some embodiments, the display terminal 90 further includes a receiving module configured to receive a split-screen preview instruction input by a user. In one implementation scenario, the display interface is provided with a split-screen preview button, and the receiving module is specifically configured to receive a split-screen preview instruction generated when the user clicks/touches the split-screen preview button.
In some embodiments, the display terminal 90 further includes a running module, configured to run a display application, where the display application is configured to provide a display interface, and the split-screen preview button is disposed on a home page of the display application.
In some embodiments, the weight information includes a weight value corresponding to the video data of each camera device. The determining module 92 includes an analyzing submodule configured to parse the weight information to obtain the weight value corresponding to the video data, and a determining submodule configured to determine, among the plurality of split screen areas, the split screen area for displaying the video data based on the weight value corresponding to the video data, wherein the larger the weight value corresponding to the video data, the earlier the position of the split screen area for displaying that video data among the plurality of split screen areas.
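The weight-ordering rule described above can be sketched as follows. This is an illustrative reading only, not code from the patent; the names `Feed` and `assign_split_screens` are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Feed:
    camera_id: str
    weight: float  # weight value carried in the weight information

def assign_split_screens(feeds):
    """Map split screen area index -> camera, with larger weight
    values placed in earlier (front) areas of the interface."""
    ordered = sorted(feeds, key=lambda f: f.weight, reverse=True)
    return {area: feed.camera_id for area, feed in enumerate(ordered)}

layout = assign_split_screens([
    Feed("gate", 0.2), Feed("lobby", 0.9), Feed("garage", 0.5),
])
# layout == {0: "lobby", 1: "garage", 2: "gate"}
```

Equal weights keep their input order (Python's sort is stable); the patent does not specify a tie-breaking rule.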
In some embodiments, the receiving module is further configured to receive a full screen viewing instruction generated when the user clicks/touches a split screen area, and the display terminal 90 further includes a weight value adjusting module configured to add a preset weight increment value to the weight value corresponding to the video data of the camera device displayed in the split screen area clicked/touched by the user.
Different from the foregoing embodiment, a full screen viewing instruction generated when the user clicks/touches a split screen area is received, and a preset weight increment value is added to the weight value corresponding to the video data of the camera device displayed in that split screen area. The weight value corresponding to the video data can therefore be adjusted according to the attention the user pays while viewing, and the split screen area of the video data can be redetermined based on the adjusted weight value. The split screen area for displaying the video data thus follows the user's attention to the video data, which helps improve video monitoring efficiency and user experience.
In some embodiments, the receiving module is further configured to receive a full screen viewing instruction generated when the user clicks/touches a split screen area; the display module 93 is further configured to display, in full screen and based on the full screen viewing instruction, the video data displayed in the split screen area clicked/touched by the user; and the weight value adjusting module is further configured to add a preset weight increment value to the weight value corresponding to that video data when the duration of its full screen display exceeds a preset duration threshold.
Different from the foregoing embodiment, a full screen viewing instruction generated when the user clicks/touches a split screen area is received, and the video data displayed in that split screen area is displayed in full screen based on the instruction. If the duration of the full screen display exceeds a preset duration threshold, a preset weight increment value is added to the weight value corresponding to the video data of the camera device displayed in that split screen area. The weight value corresponding to the video data can therefore be adjusted according to the attention the user pays while viewing, and the split screen area of the video data can be redetermined based on the adjusted weight value. The split screen area for displaying the video data thus follows the user's attention to the video data, and video monitoring efficiency and user experience are improved.
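A minimal sketch of this dwell-time rule, assuming a per-camera weight table; the patent leaves both preset values unspecified, so the threshold and increment below are placeholder assumptions:

```python
PRESET_DURATION_THRESHOLD = 5.0  # seconds; assumed placeholder value
PRESET_WEIGHT_INCREMENT = 1.0    # assumed placeholder value

def on_fullscreen_closed(weights, camera_id, viewed_seconds):
    """Bump a feed's weight only when its full-screen viewing time
    exceeded the preset duration threshold."""
    if viewed_seconds > PRESET_DURATION_THRESHOLD:
        weights[camera_id] = weights.get(camera_id, 0.0) + PRESET_WEIGHT_INCREMENT
    return weights

w = on_fullscreen_closed({"lobby": 2.0}, "lobby", viewed_seconds=8.0)
# w == {"lobby": 3.0}; a 3-second glance would leave the weight at 2.0
```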
In some embodiments, the receiving module is further configured to receive a sorting adjustment instruction generated when the user drags a split screen area; the display terminal 90 further includes a target video determining module configured to take the video data displayed in the dragged split screen area as a target video based on the sorting adjustment instruction; and the display terminal 90 further includes a split screen area adjusting module configured to re-determine the split screen area for displaying the target video, set the weight value corresponding to the target video to a preset identifier, and record the display position of the split screen area for displaying the target video.
Different from the foregoing embodiment, the user can adjust the split screen area in which the target video is displayed by manually dragging the split screen area, so the user can quickly position the video data captured by the camera device of greatest concern, further improving video monitoring efficiency.
In some embodiments, the determining module 92 further includes a screening submodule configured to screen out, from the video data of the plurality of camera devices, the video data whose weight value is the preset identifier. The determining submodule is further configured to determine the split screen area for displaying the screened video data based on the recorded display position, and to determine, among the remaining split screen areas, the split screen areas for displaying the remaining video data based on their corresponding weight values, wherein the larger the weight value corresponding to the remaining video data, the earlier the position of its split screen area among the remaining split screen areas.
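One way to read this screening step is sketched below; `PINNED` stands in for the preset identifier, and all names are illustrative assumptions rather than the patent's implementation:

```python
PINNED = "pinned"  # sentinel standing in for the preset identifier

def build_layout(weights, pinned_positions, num_areas):
    """weights: camera_id -> numeric weight value, or PINNED.
    pinned_positions: camera_id -> recorded split screen area index.
    Pinned feeds keep their recorded areas; the rest fill the
    remaining areas in descending-weight order."""
    layout = {area: cam for cam, area in pinned_positions.items()}
    free_areas = [a for a in range(num_areas) if a not in layout]
    movable = sorted(
        ((cam, w) for cam, w in weights.items() if w != PINNED),
        key=lambda cw: cw[1], reverse=True,
    )
    for area, (cam, _) in zip(free_areas, movable):
        layout[area] = cam
    return layout

layout = build_layout(
    {"gate": 0.2, "lobby": PINNED, "garage": 0.5},
    pinned_positions={"lobby": 2},
    num_areas=3,
)
# layout == {2: "lobby", 0: "garage", 1: "gate"}
```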
In some embodiments, the number of video data streams from the plurality of camera devices is greater than the number of split screen areas included in the display interface. The receiving module is further configured to receive a display switching instruction generated when the user slides the display interface, and the display module 93 is further configured to switch the video data displayed in the split screen areas of the display interface based on the display switching instruction.
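A sketch of the sliding/paging behaviour this embodiment describes, assuming the feeds are already in weight order (the function and variable names are illustrative):

```python
def page_of_feeds(ordered_feeds, num_areas, page):
    """Return the feeds shown on the given 0-based page when the
    interface holds num_areas split screen areas at a time."""
    start = page * num_areas
    return ordered_feeds[start:start + num_areas]

feeds = ["lobby", "garage", "gate", "roof", "yard"]  # 5 feeds, 4 areas
first = page_of_feeds(feeds, num_areas=4, page=0)
second = page_of_feeds(feeds, num_areas=4, page=1)
# first == ["lobby", "garage", "gate", "roof"]; second == ["yard"]
```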
Referring to fig. 10, fig. 10 is a schematic diagram of a frame of an embodiment of a display terminal 100 according to the present application. The display terminal 100 includes a memory 110, a processor 120, and a human-computer interaction circuit 130, the memory 110 and the human-computer interaction circuit 130 are coupled to the processor 120, and the memory 110, the processor 120, and the human-computer interaction circuit 130 are operable to implement the steps of any of the above-mentioned split-screen display method embodiments.
Specifically, the processor 120 is configured to control itself, the memory 110 and the human-computer interaction circuit 130 to implement the steps in any of the split-screen display method embodiments described above. The processor 120 may also be referred to as a CPU (Central Processing Unit). The processor 120 may be an integrated circuit chip having signal processing capabilities, or may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the processor 120 may be implemented jointly by multiple integrated circuit chips.
In this embodiment, the processor 120 is configured to obtain video data and weight information of a plurality of camera devices, where the display terminal 100 includes a display interface and the display interface includes a plurality of split screen areas. The processor 120 is further configured to determine, based on the weight information of the plurality of camera devices, the split screen area corresponding to the video data of each camera device, and to control the human-computer interaction circuit 130 to respectively display the video data of the plurality of camera devices in the corresponding split screen areas. In one implementation scenario, the display terminal 100 may be a mobile terminal or a microcomputer.
In the above scheme, the display terminal 100 acquires video data and weight information of a plurality of camera devices, and the display interface of the display terminal 100 comprises a plurality of split screen areas. The split screen area corresponding to the video data of each camera device is determined based on the weight information of the plurality of camera devices, and the video data of the plurality of camera devices are then respectively displayed in the corresponding split screen areas. The video data of the plurality of camera devices can thus be displayed simultaneously on the display interface of the display terminal 100, which helps improve the viewing speed of the video data. Meanwhile, because the split screen areas are assigned based on the weight information, the user first sees the video data of greatest interest or concern, can quickly locate video data, and video monitoring efficiency is improved.
In some embodiments, the processor 120 is further configured to control the human-computer interaction circuit 130 to receive a split-screen preview instruction input by a user.
In some embodiments, the display interface is provided with a split-screen preview button, and the processor 120 is further configured to control the human-computer interaction circuit 130 to receive a split-screen preview instruction generated by clicking/touching the split-screen preview button by a user. In one implementation scenario, the display terminal 100 runs a display application, the display application is used for providing a display interface, and the split-screen preview button is disposed on a home page of the display application.
In some embodiments, the weight information includes a weight value corresponding to the video data of each camera device. The processor 120 is further configured to parse the weight information to obtain the weight value corresponding to the video data, and to determine, among the plurality of split screen areas, the split screen area for displaying the video data based on the weight value corresponding to the video data, where the larger the weight value corresponding to the video data, the earlier the position of the split screen area for displaying that video data among the plurality of split screen areas.
In some embodiments, the processor 120 is further configured to control the human-computer interaction circuit 130 to receive a full screen viewing instruction generated when the user clicks/touches a split screen area, and to add a preset weight increment value to the weight value corresponding to the video data of the camera device displayed in the split screen area clicked/touched by the user.
Different from the foregoing embodiment, a full screen viewing instruction generated when the user clicks/touches a split screen area is received, and a preset weight increment value is added to the weight value corresponding to the video data of the camera device displayed in that split screen area. The weight value corresponding to the video data can therefore be adjusted according to the attention the user pays while viewing, and the split screen area of the video data can be redetermined based on the adjusted weight value. The split screen area for displaying the video data thus follows the user's attention to the video data, which helps improve video monitoring efficiency and user experience.
In some embodiments, the processor 120 is further configured to control the human-computer interaction circuit 130 to receive a full screen viewing instruction generated when the user clicks/touches a split screen area, and to control the human-computer interaction circuit 130 to display, in full screen and based on the full screen viewing instruction, the video data displayed in the split screen area clicked/touched by the user. The processor 120 is further configured to add a preset weight increment value to the weight value corresponding to that video data when the duration of its full screen display exceeds a preset duration threshold.
Different from the foregoing embodiment, a full screen viewing instruction generated when the user clicks/touches a split screen area is received, and the video data displayed in that split screen area is displayed in full screen based on the instruction. If the duration of the full screen display exceeds a preset duration threshold, a preset weight increment value is added to the weight value corresponding to the video data of the camera device displayed in that split screen area. The weight value corresponding to the video data can therefore be adjusted according to the attention the user pays while viewing, and the split screen area of the video data can be redetermined based on the adjusted weight value. The split screen area for displaying the video data thus follows the user's attention to the video data, and video monitoring efficiency and user experience are improved.
In some embodiments, the processor 120 is further configured to control the human-computer interaction circuit 130 to receive a sorting adjustment instruction generated when the user drags a split screen area, to take the video data displayed in the dragged split screen area as a target video based on the sorting adjustment instruction, and to re-determine the split screen area for displaying the target video, set the weight value corresponding to the target video to a preset identifier, and record the display position of the split screen area for displaying the target video.
Different from the foregoing embodiment, the user can adjust the split screen area in which the target video is displayed by manually dragging the split screen area, so the user can quickly position the video data captured by the camera device of greatest concern, further improving video monitoring efficiency.
In some embodiments, the processor 120 is further configured to screen out, from the video data of the plurality of camera devices, the video data whose weight value is the preset identifier, to determine the split screen area for displaying the screened video data based on the recorded display position, and to determine, among the remaining split screen areas, the split screen areas for displaying the remaining video data based on their corresponding weight values, where the larger the weight value corresponding to the remaining video data, the earlier the position of its split screen area among the remaining split screen areas.
In some embodiments, the processor 120 is further configured to control the human-computer interaction circuit 130 to receive a display switching instruction generated by sliding the display interface by a user, and the processor 120 is further configured to control the human-computer interaction circuit 130 to switch video data displayed in a split-screen area in the display interface based on the display switching instruction.
Referring to fig. 11, a schematic diagram of a memory device 1100 according to an embodiment of the present application is shown. The memory device 1100 stores program instructions 1110 capable of being executed by the processor, the program instructions 1110 being for implementing the steps in any of the split screen display method embodiments described above.
According to the above scheme, the video data of a plurality of camera devices can be displayed simultaneously on the display interface of the display terminal, which helps improve the viewing speed of the video data. Meanwhile, because the split screen area corresponding to the video data of each camera device is determined based on the weight information of the plurality of camera devices, the user first sees the video data of greatest interest or concern, can quickly locate video data, and video monitoring efficiency is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. A split-screen display method, comprising:
the method comprises the steps that a display terminal obtains video data and weight information of a plurality of camera devices, wherein the display terminal comprises a display interface, and the display interface comprises a plurality of split screen areas;
determining a split screen area corresponding to video data of each camera device based on the weight information of the plurality of camera devices;
and respectively displaying the video data of the plurality of camera devices in the corresponding split screen areas.
2. The split-screen display method according to claim 1, wherein before the display terminal acquires video data and weight information of a plurality of image pickup devices, the method further comprises:
and receiving a split-screen preview instruction input by a user.
3. The split-screen display method of claim 2, wherein the display interface is provided with a split-screen preview button, and the receiving of the split-screen preview instruction input by the user comprises:
and receiving a split screen preview instruction generated by clicking/touching the split screen preview button by the user.
4. The split-screen display method according to claim 3, wherein a display application runs on the display terminal, the display application is used for providing the display interface, and the split-screen preview button is arranged on a home page of the display application; and/or,
the display terminal comprises a mobile terminal and a microcomputer.
5. The split-screen display method according to claim 1, wherein the weight information includes a weight value corresponding to video data of the image pickup device; and the determining, based on the weight information of the plurality of image pickup devices, the split screen area corresponding to the video data of each of the image pickup devices comprises:
analyzing the weight information to obtain a weight value corresponding to the video data;
determining a split screen area for displaying the video data in the plurality of split screen areas based on a weighted value corresponding to the video data, wherein the larger the weighted value corresponding to the video data is, the earlier the position of the split screen area for displaying the video data in the plurality of split screen areas is.
6. The split-screen display method of claim 5, further comprising:
receiving a full screen viewing instruction generated by clicking/touching the split screen area by a user;
and adding a preset weight increment value to a weight value corresponding to the video data of the camera device displayed in the split screen area clicked/touched by the user.
7. The split-screen display method of claim 5, further comprising:
receiving a full screen viewing instruction generated by clicking/touching the split screen area by a user;
based on the full-screen viewing instruction, displaying the video data displayed in the split-screen area clicked/touched by the user in a full screen mode;
and if the duration of the full-screen display of the video data displayed in the split-screen area clicked/touched by the user exceeds a preset duration threshold, adding a preset weight increment value to the weight value corresponding to the video data of the camera device displayed in the split-screen area clicked/touched by the user.
8. The split-screen display method of claim 5, further comprising:
receiving a sequencing adjustment instruction generated by dragging the split screen area by a user;
based on the sequencing adjustment instruction, taking video data displayed in a split screen area dragged by a user as a target video;
and re-determining a split screen area for displaying the target video, setting a weight value corresponding to the target video as a preset identifier, and recording a display position of the split screen area for displaying the target video.
9. The split-screen display method according to claim 8, wherein the determining, among the plurality of split-screen areas, a split-screen area for displaying the video data based on the weight value corresponding to the video data comprises:
screening the video data with the weighted values as the preset marks from the video data of the plurality of camera devices;
determining a split screen area for displaying the screened video data based on the recorded display position;
and determining a split screen area for displaying the remaining video data in the remaining split screen area based on a weighted value corresponding to the remaining video data, wherein the larger the weighted value corresponding to the remaining video data is, the earlier the position of the split screen area for displaying the remaining video data in the remaining split screen area is.
10. The split-screen display method according to claim 1, wherein the number of video data of the plurality of camera devices is larger than the number of the plurality of split-screen areas included in the display interface, the method further comprising:
receiving a display switching instruction generated by sliding the display interface by a user;
and switching the video data displayed in the split screen area in the display interface based on the display switching instruction.
11. A display terminal, comprising a memory, a processor and a human-computer interaction circuit, wherein the memory and the human-computer interaction circuit are coupled to the processor, and the memory, the processor and the human-computer interaction circuit are operable to implement the split-screen display method according to any one of claims 1 to 10.
12. A storage device storing program instructions executable by a processor to implement the split screen display method of any one of claims 1 to 10.
CN201911305117.XA 2019-12-17 2019-12-17 Split screen display method, display terminal and storage device Active CN111064930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911305117.XA CN111064930B (en) 2019-12-17 2019-12-17 Split screen display method, display terminal and storage device


Publications (2)

Publication Number Publication Date
CN111064930A true CN111064930A (en) 2020-04-24
CN111064930B CN111064930B (en) 2021-08-03

Family

ID=70302141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911305117.XA Active CN111064930B (en) 2019-12-17 2019-12-17 Split screen display method, display terminal and storage device

Country Status (1)

Country Link
CN (1) CN111064930B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5121201A (en) * 1989-08-25 1992-06-09 Daido Denki Kogyo Kabushiki Kaisha Method and apparatus for detecting the number of persons
WO2004055990A2 (en) * 2002-12-12 2004-07-01 Scientific-Atlanta, Inc. Data enhanced multi-media system for an external device
TW201447724A (en) * 2013-03-15 2014-12-16 Oplink Communications Inc Interfaces for security system control
CN104574359A (en) * 2014-11-03 2015-04-29 南京邮电大学 Student tracking and positioning method based on primary and secondary cameras
CN105554471A (en) * 2016-01-20 2016-05-04 浙江宇视科技有限公司 Video sequence intelligent adjusting method and device based on event statistics
CN106231259A (en) * 2016-07-29 2016-12-14 北京小米移动软件有限公司 The display packing of monitored picture, video player and server
US20170078767A1 (en) * 2015-09-14 2017-03-16 Logitech Europe S.A. Video searching for filtered and tagged motion
CN106791646A (en) * 2016-12-20 2017-05-31 北京小米移动软件有限公司 Show the method and device of video information
CN109862323A (en) * 2019-02-20 2019-06-07 北京旷视科技有限公司 Playback method, device and the processing equipment of multi-channel video


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112333502A (en) * 2020-07-30 2021-02-05 深圳Tcl新技术有限公司 Intelligent television display method, intelligent television and computer readable storage medium
CN112333502B (en) * 2020-07-30 2024-07-05 深圳Tcl新技术有限公司 Smart television display method, smart television and computer-readable storage medium
CN112422914A (en) * 2020-11-17 2021-02-26 珠海大横琴科技发展有限公司 Display method and device of monitoring video, electronic equipment and storage medium
WO2022160624A1 (en) * 2021-02-01 2022-08-04 深圳创维-Rgb电子有限公司 Video transmission system and method

Also Published As

Publication number Publication date
CN111064930B (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN111064930B (en) Split screen display method, display terminal and storage device
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN113766129B (en) Video recording method, video recording device, electronic equipment and medium
CN112069358B (en) Information recommendation method and device and electronic equipment
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112954214B (en) Shooting method, shooting device, electronic equipment and storage medium
US20230111361A1 (en) Method and apparatus for processing video data
CN112954199B (en) Video recording method and device
CN110891191B (en) Material selection method, device and storage medium
CN112911147B (en) Display control method, display control device and electronic equipment
CN112887618B (en) Video shooting method and device
CN104754223A (en) Method for generating thumbnail and shooting terminal
KR102128955B1 (en) Method for generating a spin image and apparatus thereof
CN112637500A (en) Image processing method and device
CN113709368A (en) Image display method, device and equipment
CN110868632B (en) Video processing method and device, storage medium and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN113010738B (en) Video processing method, device, electronic equipment and readable storage medium
CN113891018A (en) Shooting method and device and electronic equipment
CN112396675A (en) Image processing method, device and storage medium
CN114245017A (en) Shooting method and device and electronic equipment
CN113923392A (en) Video recording method, video recording device and electronic equipment
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant