CN115277650A - Screen projection display control method, electronic equipment and related device - Google Patents

Screen projection display control method, electronic equipment and related device

Info

Publication number
CN115277650A
CN115277650A (application number CN202210819678.7A)
Authority
CN
China
Prior art keywords
target
key points
area
video content
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210819678.7A
Other languages
Chinese (zh)
Other versions
CN115277650B (en)
Inventor
曹佳新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Happycast Technology Co Ltd
Original Assignee
Shenzhen Happycast Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Happycast Technology Co Ltd filed Critical Shenzhen Happycast Technology Co Ltd
Priority to CN202210819678.7A
Publication of CN115277650A
Application granted
Publication of CN115277650B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • H04L65/4038 Arrangements for multi-party communication, e.g. for conferences with floor control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the technical field of computer technology and internet, in particular to a screen projection display control method, electronic equipment and a related device, which are applied to the electronic equipment, wherein the electronic equipment realizes screen projection display through a large screen, the large screen comprises a first area and a second area, and the method comprises the following steps: displaying video content of a target object in the first area; displaying the text content of the target object in the second area; determining N first key points of the video content, wherein N is an integer greater than 1; configuring N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one to one; and realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points. By adopting the embodiment of the application, the video content and the text content can be played simultaneously, the conference effect is improved, and the user experience is also improved.

Description

Screen projection display control method, electronic equipment and related device
Technical Field
The application relates to the technical field of computer technology and internet, in particular to a screen projection display control method, electronic equipment and a related device.
Background
With the rapid development of internet technology, web conferences have become an important means of communication. A web conference system is a multimedia conference platform that uses the network as its medium, allowing users to overcome the limitations of time and location and achieve a face-to-face communication effect over the internet. However, current conferences can only play a single piece of content at a time, which limits the conference effect to a certain extent.
Disclosure of Invention
The embodiments of the present application provide a screen projection display control method, an electronic device, and a related apparatus, which can play video content and text content at the same time, improving the conference effect and the user experience.
In a first aspect, an embodiment of the present application provides a screen projection display control method, which is applied to an electronic device, where the electronic device implements screen projection display through a large screen, where the large screen includes a first area and a second area, and the method includes:
displaying video content of a target object in the first area;
displaying the text content of the target object in the second area;
determining N first key points of the video content, wherein N is an integer greater than 1;
configuring N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one to one;
and realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
In a second aspect, an embodiment of the present application provides a screen projection display control apparatus, which is applied to an electronic device, where the electronic device implements screen projection display through a large screen, where the large screen includes a first area and a second area, and the apparatus includes: a display unit, a determination unit, and a playback unit, wherein,
the display unit is used for displaying the video content of the target object in the first area; and displaying the text content of the target object in the second area;
the determining unit is configured to determine N first keypoints of the video content, where N is an integer greater than 1; configuring N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one by one;
and the playing unit is used for realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that the screen projection display control method, the electronic device, and the related apparatus described in the embodiments of the present application are applied to an electronic device that realizes screen projection display through a large screen comprising a first area and a second area. Video content of a target object is displayed in the first area and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, with the N first key points corresponding to the N second key points one to one; and synchronous playing of the video content and the text content is realized according to the N first key points and the N second key points.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a screen projection display control method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another screen projection display control method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a block diagram of functional units of a projection display control device according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus in one possible example.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The electronic device related to the embodiments of the present application may include a server, which can be a cloud server or an edge server.
The local devices related to the embodiments of the present application may include, but are not limited to: smart phones, tablets, smart robots, smart projectors, conferencing devices, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like.
In this embodiment of the present application, the first local device, the second local device, and the third local device include the above local devices.
In the embodiments of the present application, a cloud conference, that is, a web conference, refers to a conference group created by a conference creator at the cloud end through a first local device. The cloud end can simply be understood as a dedicated cloud application installed on the cloud server side, which supports the conference group created at the cloud server side and provides the conference service.
In the embodiments of the present application, the speaker is a participant who obtains control authority over the conference desktop of the cloud conference through a second local device; by default, only a single user controls the conference desktop at any given time. The speaker may also be the conference host.
In the embodiments of the present application, the shared content refers to content information uploaded to the cloud space of the cloud conference by a participant of the cloud conference through a third local device, such as a file (e.g., various office files, a CAD drawing file, an audio file, a video file), a document (e.g., a PPT), a split-screen image of a user's local device, screen recording content, and the like. The shared screen projection display can also be recorded into a final screen recording file.
In the embodiments of the present application, a conference desktop (such as the host's cloud desktop) displays at least one item of shared content of the cloud conference, where a single speaker can input explanation information for one or more items of shared content, and a single item of shared content can receive explanation information from one or more speakers.
Referring to fig. 1, fig. 1 is a schematic flowchart of a screen projection display control method provided in an embodiment of the present application, and as shown in the figure, the screen projection display control method is applied to an electronic device, where the electronic device implements screen projection display through a large screen, and the large screen includes a first area and a second area, and the screen projection display control method includes:
101. and displaying the video content of the target object in the first area.
In a specific implementation, the target object may be a product, an object, a task, a person, or the like, which is not limited herein. The large screen may include at least one of: an electronic whiteboard, a screen, a curtain, a virtual screen, etc., which is not limited herein; for example, the large screen may be a participant's screen or a speaker's screen.
In the embodiment of the application, the electronic device can realize the screen projection display function through a large screen, and the large screen can comprise a first area and a second area. The first region may be used to display video content and the second region may be used to display corresponding textual content. For example, video content of the target object may be displayed in the first region.
102. And displaying the text content of the target object in the second area.
In a specific implementation, the video content and the text content of the target object may have a corresponding relationship, for example, the video content may be a product introduction of a product a in a video form, and the text content may be a product introduction of a product a in a text content form. The textual content may include at least one of: PPT, word, PDF, etc., without limitation.
103. Determining N first key points of the video content, wherein N is an integer larger than 1.
In a specific implementation, N first key points of the video content may be determined, where N is an integer greater than 1. The N first key points may be marked manually by the user or set by system default, or the video content may be marked at preset time intervals to obtain the N first key points, where the preset time interval may be user-defined or a system default.
104. And configuring N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one to one.
In a specific implementation, the N first key points may correspond to time nodes, or may carry keywords describing the video picture at the corresponding positions, so that the N second key points in the text content may be determined based on these time nodes or keywords, with the N first key points and the N second key points in one-to-one correspondence.
105. And realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
In a specific implementation, because the first key points and the second key points are in one-to-one correspondence, synchronous playing of the video content and the text content can be achieved based on this mapping relationship, that is, according to the N first key points and the N second key points.
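To make the key-point mechanism of steps 103-105 concrete, the following Python sketch shows one possible realization, assuming that each first key point is a time node in the video and each second key point is a page or paragraph anchor in the text; the names KeyPoint, build_second_keypoints, and sync_text_position are illustrative only and do not appear in the present application:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KeyPoint:
    index: int           # position in the sequence of N key points
    video_time_s: float  # time node in the video content (first key point)
    text_anchor: int     # page or paragraph index in the text content (second key point)

def build_second_keypoints(first_keypoints: List[KeyPoint], text_anchors: List[int]) -> List[KeyPoint]:
    """Configure N second key points in one-to-one correspondence with the N first key points."""
    assert len(first_keypoints) == len(text_anchors)
    return [KeyPoint(kp.index, kp.video_time_s, anchor)
            for kp, anchor in zip(first_keypoints, text_anchors)]

def sync_text_position(keypoints: List[KeyPoint], current_video_time_s: float) -> int:
    """Return the text anchor of the most recent key point before the video playhead."""
    passed = [kp for kp in keypoints if kp.video_time_s <= current_video_time_s]
    return passed[-1].text_anchor if passed else keypoints[0].text_anchor

if __name__ == "__main__":
    firsts = [KeyPoint(i, 30.0 * i, -1) for i in range(5)]   # e.g. a marker every 30 s
    pairs = build_second_keypoints(firsts, [1, 2, 3, 4, 5])  # map to pages 1..5
    print(sync_text_position(pairs, 75.0))                   # -> 3
```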
Optionally, the method may further include the following steps:
a1, acquiring target voice information;
a2, extracting keywords from the target voice information to obtain target keywords;
a3, positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one of the N second key points;
a4, determining a target first key point corresponding to the target second key point;
a5, positioning the video content according to the target first key point to obtain a positioning position;
and A6, jumping to the positioning position to play the video content.
In a specific implementation, the target voice information may be voice information of a speaker, for example, the speaker may introduce a product and record a sound at the same time.
Specifically, in the embodiments of the present application, the target voice information may be obtained and keyword extraction performed on it to obtain a target keyword. The text content is located according to the target keyword to obtain a target second key point, which is one of the N second key points. Based on the mapping relationship between the first key points and the second key points, the target first key point corresponding to the target second key point is determined, the video content is located according to the target first key point to obtain a positioning position, and playback jumps to that position to play the video content.
In the embodiments of the present application, content positioning is realized through voice recognition: a voice signal is received, keyword extraction is performed, a control instruction corresponding to the keyword is obtained, and the position at which the content needs to be displayed is located, which greatly improves cloud conference efficiency.
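The voice-based positioning of steps A1-A6 can be sketched as follows, assuming that speech recognition has already produced a transcript and that a keyword-to-key-point table has been configured in advance; all function names and the example data are hypothetical:

```python
from typing import Dict, List, Optional

def extract_keywords(transcript: str, known_keywords: List[str]) -> List[str]:
    """Very simple keyword extraction: keep only words that appear in a known keyword list."""
    return [w for w in known_keywords if w in transcript]

def locate_by_keyword(keyword_to_second_kp: Dict[str, int],
                      second_to_first_kp: Dict[int, float],
                      transcript: str) -> Optional[float]:
    """Map a spoken keyword to a second key point, then to the video position of the
    corresponding first key point (steps A1-A6)."""
    keywords = extract_keywords(transcript, list(keyword_to_second_kp))
    if not keywords:
        return None
    target_second = keyword_to_second_kp[keywords[0]]  # target second key point
    return second_to_first_kp[target_second]           # positioning position in the video

if __name__ == "__main__":
    kw_to_kp = {"battery": 2, "camera": 4}
    kp_to_time = {2: 60.0, 4: 120.0}
    print(locate_by_keyword(kw_to_kp, kp_to_time, "now let us look at the camera module"))  # -> 120.0
```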
Optionally, the method may further include the following steps:
b1, acquiring a selection instruction of the video content;
b2, responding to the selection instruction to obtain a target video frame, and acquiring a reference first key point corresponding to the target video frame;
b3, determining a reference second key point according to the reference first key point;
b4, jumping the text content to a corresponding position according to the reference second key point, and displaying the text content of the corresponding position.
In a specific implementation, the speaker can select video content; that is, a selection instruction for the video content can be obtained, a target video frame is obtained in response to the selection instruction, and a reference first key point corresponding to the target video frame is acquired. Based on the mapping relationship between the first key points and the second key points, a reference second key point can be determined from the reference first key point. Finally, the text content can be skipped to the corresponding position according to the reference second key point and displayed there, which guarantees synchronous skipping and synchronous playing between the text content and the video content and improves user experience.
Optionally, in the step B1, the obtaining of the selection instruction of the video content may include the following steps:
b11, displaying N first key points;
b12, selecting a first key point i, wherein the first key point i is one of the N first key points;
and B13, generating the selecting instruction, wherein the selecting instruction is used for selecting the video content corresponding to the first key point i.
In a specific implementation, the N first key points can be displayed; for example, each key point can correspond to a key point identifier, and the key point identifiers of the N first key points can be displayed. A first key point i is selected, where the first key point i is one of the N first key points, that is, one key point identifier is selected as the first key point i. A selection instruction is then generated, which is used to select the video content corresponding to the first key point i, so that rapid video positioning can be realized.
Of course, each first keypoint may also correspond to a keyword, and the selection of the first keypoint i is realized by selecting the keyword.
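A minimal sketch of the selection mechanism of steps B11-B13 and B2-B4 is given below; the TextView stand-in and the dictionaries are illustrative assumptions, not structures defined in the present application:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SelectInstruction:
    keypoint_index: int  # identifier of the selected first key point i

class TextView:
    """Stand-in for the second-area renderer; only records the position it was jumped to."""
    def __init__(self) -> None:
        self.position = 0
    def scroll_to(self, anchor: int) -> None:
        self.position = anchor

def on_keypoint_selected(displayed_keypoints: List[int], clicked_id: int) -> SelectInstruction:
    """Generate the selection instruction from a clicked key point identifier (steps B11-B13)."""
    if clicked_id not in displayed_keypoints:
        raise ValueError("unknown key point identifier")
    return SelectInstruction(keypoint_index=clicked_id)

def apply_selection(instruction: SelectInstruction,
                    first_to_second: Dict[int, int],
                    text_view: TextView) -> None:
    """Jump the text content to the corresponding second key point (steps B2-B4)."""
    text_view.scroll_to(first_to_second[instruction.keypoint_index])

if __name__ == "__main__":
    view = TextView()
    instr = on_keypoint_selected([0, 1, 2, 3], 2)
    apply_selection(instr, {0: 1, 1: 2, 2: 3, 3: 4}, view)
    print(view.position)  # -> 3
```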
For example, in the embodiments of the present application, the presentation timings of the video content and the text content may be synchronized; that is, a presentation-progress synchronization mechanism may be configured in advance, for example, one that uses the presentation progress of a static file as the reference, one that associates key nodes of a video or an operating state, or one that predicts purely based on AI. Specifically, assume that the first area of the large screen shows a content page of a PPT document of a target product uploaded by the speaker, the second area shows a recorded video of the target product uploaded by the speaker, and the PPT has 20 pages; the recorded video of the target product can be divided into 20 key nodes according to the 20-page presentation sequence. When the speaker turns to page 5, the cloud space looks up the pre-configured correspondence according to the page number to locate the fifth key node of the recorded video, and while the speaker explains the page-5 content, the following interaction modes are supported to synchronize the corresponding video content:
(1) The playing progress of the recorded video in the second area is automatically positioned to the fifth key node, and the speaker can click the play button to play the corresponding part of the video;
(2) After the speaker turns to page 5, the video in the second area plays the corresponding video content according to the located fifth key node, and after the video finishes, the speaker continues the explanation;
(3) The speaker turns to page 5 and starts the explanation; the cloud space collects and intelligently analyzes the speaker's voice information, and when a voice instruction to play the demonstration video is detected, playback is controlled automatically.
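For the 20-page PPT example above, the page-to-key-node lookup could be as simple as the following sketch; the one-to-one, evenly scaled correspondence and the function name are assumptions for illustration:

```python
def key_node_for_page(page_number: int, total_pages: int = 20, total_nodes: int = 20) -> int:
    """Look up the video key node pre-configured for a given PPT page.
    With a one-to-one split (20 pages / 20 key nodes) the mapping is direct;
    the scaling handles the case where the counts differ."""
    if not 1 <= page_number <= total_pages:
        raise ValueError("page out of range")
    return round(page_number * total_nodes / total_pages)

print(key_node_for_page(5))  # -> 5: turning to page 5 locates the fifth key node
```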
Optionally, the method may further include the following steps:
c1, acquiring target attribute parameters of a display screen of a user side;
c2, determining a first size parameter of a first area of the large screen according to the target attribute parameter;
and C3, determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
In this embodiment, the target attribute parameter may include at least one of the following: size, type, material, etc., without limitation. In a specific implementation, a mapping relationship between preset attribute parameters and first size parameters may be pre-stored, and the first size parameter of the first area of the large screen corresponding to the target attribute parameter can then be determined based on this mapping relationship.
In a specific implementation, the target attribute parameter of the user side's display screen can be acquired, the first size parameter of the first area of the large screen is determined according to the target attribute parameter, and finally the second size parameter of the second area is determined according to the first size parameter and the target attribute parameter. That is, the large screen is divided into two areas, the first area and the second area, which display different content: one displays the video content and the other displays the text content.
For example, the large screen may reserve a display area matched to the aspect ratio of the mobile phone's projected screen, with the remaining display area adapted to show the projected PPT; one of the other client devices may also be selected for screen projection display, or the stream may be pushed automatically and adapted dynamically according to the screen the user is explaining.
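One possible way to derive the two size parameters of steps C1-C3 is sketched below, assuming the target attribute parameter is the resolution of the projecting display and that the first area preserves its aspect ratio; the function name and the splitting rule are illustrative only:

```python
from typing import Tuple

def split_large_screen(screen_w: int, screen_h: int,
                       source_w: int, source_h: int) -> Tuple[Tuple[int, int], Tuple[int, int]]:
    """Derive the first-area size from the projecting display's aspect ratio (target attribute
    parameter) and give the remaining width to the second area (steps C1-C3)."""
    first_w = min(screen_w, round(screen_h * source_w / source_h))
    first_h = screen_h
    second_w = screen_w - first_w
    second_h = screen_h
    return (first_w, first_h), (second_w, second_h)

if __name__ == "__main__":
    first, second = split_large_screen(3840, 1080, 1920, 1080)
    print(first, second)  # -> (1920, 1080) (1920, 1080)
```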
Optionally, the method may further include the following steps:
d1, receiving a full-screen display instruction of the first area;
d2, maximizing the first area, and hiding the second area;
and D3, displaying the video content of the target object through the first area.
In a specific implementation, a full-screen display instruction for the first area may also be received; the first area is then maximized and the second area is hidden, where hiding may include at least one of: minimizing the second area, placing it beneath the first area, closing the second area, and the like, which is not limited herein. Furthermore, the video content of the target object can be displayed through the first area, so that full-screen display of the video content is realized.
Of course, the second area can also be displayed full screen; the principle is similar to the full-screen display of the first area and is not repeated here.
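A minimal state sketch of the full-screen handling in steps D1-D3; the AreaState structure is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AreaState:
    maximized: bool = False
    hidden: bool = False

def enter_fullscreen(first: AreaState, second: AreaState) -> None:
    """Steps D1-D3: maximize the first area and hide the second area."""
    first.maximized = True
    second.hidden = True

def exit_fullscreen(first: AreaState, second: AreaState) -> None:
    """Restore the split-screen layout with both areas visible."""
    first.maximized = False
    second.hidden = False
```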
Optionally, in the step D3, displaying the video content of the target object through the first area may include the following steps:
d31, acquiring target environment parameters;
d32, determining a target optimization factor corresponding to the target environment parameter;
d33, acquiring a first screen projection display parameter;
d34, optimizing the first projection screen display parameter according to the target optimization factor to obtain a target first projection screen display parameter;
and D35, displaying the video content of the target object through the first area according to the target first screen projection display parameter.
In an embodiment of the present application, the target environment parameter may include at least one of: ambient brightness, distance, the user's vision parameters, the user's angle with respect to the electronic whiteboard, etc., without limitation. In a specific implementation, a mapping relationship between preset environment parameters and optimization factors can be stored in advance, and the value range of the optimization factor can be -0.1 to 0.1. The first screen projection display parameter may be a default screen projection display parameter, and the screen projection display parameters may include at least one of: resolution, font size, frame rate, display color, sharpness, etc., without limitation.
In a specific implementation, a target environment parameter may be obtained, the target optimization factor corresponding to the target environment parameter is determined according to the mapping relationship between preset environment parameters and optimization factors, a first screen projection display parameter is obtained, and the first screen projection display parameter is optimized according to the target optimization factor to obtain a target first screen projection display parameter, where the target first screen projection display parameter = (1 + target optimization factor) × the first screen projection display parameter. That is, the influence of the user's viewing angle, the ambient light, and the distance can be taken into account while the display follows the color differentiation standard; the displayed color parameters are finely adjusted based on the angle, the ambient light, and the distance, which improves the screen projection display effect and the user experience.
In a specific implementation, the screen projection parameters can be adjusted based on the user's position, distance, and angle, so that the screen projection effect better meets the user's requirements; in addition, the screen projection parameters can be adjusted dynamically in combination with the user's vision parameters.
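The optimization formula above, target first screen projection display parameter = (1 + target optimization factor) × first screen projection display parameter, can be sketched as follows; the brightness thresholds used for the factor lookup are illustrative assumptions, not values from the present application:

```python
def lookup_optimization_factor(ambient_brightness_lux: float) -> float:
    """Hypothetical mapping from an environment parameter to an optimization factor in [-0.1, 0.1]."""
    if ambient_brightness_lux < 100:    # dim room: tone the display parameter down slightly
        return -0.05
    if ambient_brightness_lux > 1000:   # bright room: boost it slightly
        return 0.1
    return 0.0

def optimize_display_parameter(first_param: float, factor: float) -> float:
    """Apply the formula: target parameter = (1 + optimization factor) x first parameter."""
    return (1 + factor) * first_param

print(optimize_display_parameter(60.0, lookup_optimization_factor(1500)))  # e.g. frame rate 60 -> 66.0
```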
Optionally, the method may further include the following steps:
e1, acquiring a first playing parameter of the video content and a second playing parameter of the text content;
e2, determining a target speech speed of the speaker;
e3, determining a target adjusting parameter corresponding to the target speech speed;
e4, adjusting the first playing parameter according to the target adjusting parameter to obtain a target first playing parameter;
e5, adjusting the second playing parameter according to the target first playing parameter to obtain a target second playing parameter;
and E6, playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
In specific implementation, a first playing parameter of the video content and a second playing parameter of the text content can be obtained, the first playing parameter corresponds to the second playing parameter, and synchronous playing can be achieved between the first playing parameter and the second playing parameter. The first playback parameter may include at least one of: frame rate, resolution, font size, sharpness, etc., and are not limited herein. The second playback parameter may include at least one of: playback rate, resolution, font size, sharpness, etc., and are not limited herein.
Furthermore, a target speech speed of the speaker can be determined. A mapping relationship between preset speech speeds and adjusting parameters can be stored in advance, and the target adjusting parameter corresponding to the target speech speed can then be determined based on this mapping relationship. The first playing parameter is adjusted according to the target adjusting parameter to obtain a target first playing parameter, and the second playing parameter is adjusted according to the target first playing parameter to obtain a target second playing parameter, which ensures playing synchronism between them. The video content can then be played according to the target first playing parameter and the text content according to the target second playing parameter, so that synchronous playing between the video content and the text content is realized.
Of course, automatic skipping can also be realized during playing based on the user's speech speed and the position of the keyword.
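A sketch of the speech-speed-driven adjustment of steps E1-E6, assuming the playing parameter being adjusted is a playback rate and using illustrative speech-speed thresholds that are not specified in the present application:

```python
def adjust_for_speech_rate(first_play_rate: float, speech_rate_wpm: float) -> tuple:
    """Steps E1-E6 sketch: map the speaker's speech speed to an adjusting parameter, adjust the
    video playing parameter, then derive the text playing parameter from it to stay in sync."""
    if speech_rate_wpm > 180:    # fast speaker
        adjust = 0.25
    elif speech_rate_wpm < 100:  # slow speaker
        adjust = -0.25
    else:
        adjust = 0.0
    target_first = first_play_rate * (1 + adjust)  # target first playing parameter
    target_second = target_first                   # keep the text scroll rate locked to the video rate
    return target_first, target_second

print(adjust_for_speech_rate(1.0, 200))  # -> (1.25, 1.25)
```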
Optionally, the first area may jump to the second area, that is, the second area is displayed full screen. In a specific implementation, the jump from the first area to the second area is based on the user's authority: the user's identity needs to be verified before the jump, the jump is performed only when the identity verification passes, and otherwise it is not performed. For example, this may be provided as a member service. With this setting, security can be ensured on the one hand, and a membership function can be implemented on the other.
Optionally, in the embodiment of the present application, a recording function may also be implemented, and in the recording process, the video may be compressed and/or cut to obtain more detailed video content.
Optionally, the large screen may include at least one screen, for example, the first area and the second area may correspond to 2 screens, and multiple screens may also be implemented, where contents between different screens may be spliced or displayed separately.
It can be seen that the screen projection display control method described in the embodiments of the present application is applied to an electronic device that realizes screen projection display through a large screen comprising a first area and a second area. Video content of a target object is displayed in the first area and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, with the N first key points corresponding to the N second key points one to one; and synchronous playing of the video content and the text content is realized according to the N first key points and the N second key points. The video content and the text content can thus be displayed in two separate areas of one large screen and played synchronously through the key points, so that both can be played at the same time, which enhances the conference effect and the user experience.
Consistent with the embodiment shown in fig. 1, please refer to fig. 2, where fig. 2 is a schematic flowchart of another control method for screen projection display provided in the embodiment of the present application, and the method is applied to an electronic device, where the electronic device implements screen projection display through a large screen, and the large screen includes a first area and a second area, and the control method for screen projection display includes:
201. and acquiring target attribute parameters of a display screen of the user side.
202. And determining a first size parameter of a first area of the large screen according to the target attribute parameter.
203. And determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
204. And displaying the video content of the target object in the first area.
205. And displaying the text content of the target object in the second area.
206. Determining N first key points of the video content, wherein N is an integer larger than 1.
207. And configuring N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one to one.
208. And realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
The detailed description of steps 201 to 208 may refer to the corresponding steps of the screen projection display control method described in fig. 1 and is not repeated here.
It can be seen that the screen projection display control method described in the embodiments of the present application is applied to an electronic device that realizes screen projection display through a large screen comprising a first area and a second area. A target attribute parameter of the display screen of the user side is acquired, a first size parameter of the first area of the large screen is determined according to the target attribute parameter, and a second size parameter of the second area is determined according to the first size parameter and the target attribute parameter. Video content of a target object is displayed in the first area and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, with the N first key points corresponding to the N second key points one to one; and the video content and the text content are played synchronously according to the N first key points and the N second key points. Two areas under one large screen thus display the video content and the text content respectively and are kept synchronized through the key points, so that both can be played at the same time, which improves the conference effect and the user experience.
In accordance with the foregoing embodiment, please refer to fig. 3, where fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and as shown in the drawing, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and in an embodiment of the present application, the electronic device implements a projection display through a large screen, the large screen includes a first area and a second area, and the programs include instructions for performing the following steps:
displaying video content of a target object in the first area;
displaying the text content of the target object in the second area;
determining N first key points of the video content, wherein N is an integer greater than 1;
configuring N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one to one;
and realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
Optionally, the program further includes instructions for performing the following steps:
acquiring target voice information;
extracting keywords from the target voice information to obtain target keywords;
positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one of the N second key points;
determining a target first key point corresponding to the target second key point;
positioning the video content according to the target first key point to obtain a positioning position;
and jumping to the positioning position to play the video content.
Optionally, the program further includes instructions for performing the following steps:
acquiring a selection instruction of the video content;
responding to the selection instruction, obtaining a target video frame, and acquiring a reference first key point corresponding to the target video frame;
determining a reference second key point according to the reference first key point;
and jumping the text content to a corresponding position according to the reference second key point, and displaying the text content of the corresponding position.
Optionally, in the aspect of acquiring the selection instruction for the video content, the program includes instructions for performing the following steps:
displaying N first key points;
selecting a first key point i, wherein the first key point i is one of the N first key points;
and generating the selection instruction, wherein the selection instruction is used for selecting the video content corresponding to the first key point i.
Optionally, the program further includes instructions for performing the following steps:
acquiring a target attribute parameter of a display screen of a user side;
determining a first size parameter of a first area of the large screen according to the target attribute parameter;
and determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
Optionally, the program further includes instructions for performing the following steps:
receiving a full screen display instruction of the first area;
maximizing the first area and hiding the second area;
displaying the video content of the target object through the first area.
Optionally, the program further includes instructions for performing the following steps:
acquiring a first playing parameter of the video content and a second playing parameter of the text content;
determining a target speech speed of a speaker;
determining a target adjusting parameter corresponding to the target speech speed;
adjusting the first playing parameter according to the target adjusting parameter to obtain a target first playing parameter;
adjusting the second playing parameter according to the target first playing parameter to obtain a target second playing parameter;
and playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
It can be seen that the electronic device described in this embodiment of the present application realizes screen projection display through a large screen comprising a first area and a second area. Video content of a target object is displayed in the first area and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, with the N first key points corresponding to the N second key points one to one; and synchronous playing of the video content and the text content is realized according to the N first key points and the N second key points. The video content and the text content can thus be displayed in two separate areas of one large screen and played synchronously through the key points, so that both can be played at the same time, which improves the conference effect and the user experience.
Fig. 4 is a block diagram of functional units of a screen projection display control apparatus 400 according to an embodiment of the present application, where the apparatus 400 is applied to an electronic device, the electronic device implements screen projection display through a large screen, the large screen includes a first area and a second area, and the apparatus 400 includes: a display unit 401, a determination unit 402, and a playback unit 403, wherein,
the display unit 401 is configured to display the video content of the target object in the first area; and displaying the text content of the target object in the second area;
the determining unit 402 is configured to determine N first keypoints of the video content, where N is an integer greater than 1; configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points;
the playing unit 403 is configured to implement synchronous playing of the video content and the text content according to the N first key points and the N second key points.
Optionally, the apparatus 400 is further specifically configured to:
acquiring target voice information;
extracting keywords from the target voice information to obtain target keywords;
positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one of the N second key points;
determining a target first key point corresponding to the target second key point;
positioning the video content according to the target first key point to obtain a positioning position;
jumping to the positioning position to play the video content.
Optionally, the apparatus 400 is further specifically configured to:
acquiring a selection instruction of the video content;
responding to the selection instruction, obtaining a target video frame, and acquiring a reference first key point corresponding to the target video frame;
determining a reference second key point according to the reference first key point;
and skipping the text content to a corresponding position according to the reference second key point, and displaying the text content at the corresponding position.
Optionally, in terms of the obtaining of the selection instruction for the video content, the apparatus 400 is specifically configured to:
displaying N first key points;
selecting a first key point i, wherein the first key point i is one of the N first key points;
and generating the selection instruction, wherein the selection instruction is used for selecting the video content corresponding to the first key point i.
Optionally, the apparatus 400 is further specifically configured to:
acquiring target attribute parameters of a display screen of a user side;
determining a first size parameter of a first area of the large screen according to the target attribute parameter;
and determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
Optionally, the apparatus 400 is further specifically configured to:
receiving a full screen display instruction of the first area;
maximizing the first area and hiding the second area;
displaying the video content of the target object through the first area.
Optionally, the apparatus 400 is further specifically configured to:
acquiring a first playing parameter of the video content and a second playing parameter of the text content;
determining a target speech speed of a speaker;
determining a target adjusting parameter corresponding to the target speech speed;
adjusting the first playing parameter according to the target adjusting parameter to obtain a target first playing parameter;
adjusting the second playing parameter according to the target first playing parameter to obtain a target second playing parameter;
and playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
It can be seen that the screen projection display control device described in this embodiment of the present application is applied to an electronic device that realizes screen projection display through a large screen comprising a first area and a second area. Video content of a target object is displayed in the first area and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, with the N first key points corresponding to the N second key points one to one; and synchronous playing of the video content and the text content is realized according to the N first key points and the N second key points. The video content and the text content can thus be displayed in two separate areas of one large screen and played synchronously through the key points, so that both can be played at the same time, which improves the conference effect and the user experience.
It can be understood that the functions of each program module of the projection display control device in this embodiment can be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process thereof may refer to the relevant description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the part of the technical solution of the present application that essentially contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps of the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory; the memory may include: flash memory disks, Read-Only Memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description has illustrated the principles and implementations of the present application through specific embodiments; the above description of the embodiments is only provided to help understand the method and the core concept of the present application. Meanwhile, for a person skilled in the art, the specific implementation and the scope of application may vary according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A screen projection display control method is applied to electronic equipment, the electronic equipment realizes screen projection display through a large screen, the large screen comprises a first area and a second area, and the method comprises the following steps:
displaying video content of a target object in the first area;
displaying the text content of the target object in the second area;
determining N first key points of the video content, wherein N is an integer greater than 1;
configuring N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one to one;
and realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
2. The method of claim 1, further comprising:
acquiring target voice information;
extracting keywords from the target voice information to obtain target keywords;
positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one of the N second key points;
determining a target first key point corresponding to the target second key point;
positioning the video content according to the target first key point to obtain a positioning position;
jumping to the positioning position to play the video content.
3. The method of claim 1, further comprising:
acquiring a selection instruction of the video content;
in response to the selection instruction, obtaining a target video frame, and acquiring a reference first key point corresponding to the target video frame;
determining a reference second key point according to the reference first key point;
and skipping the text content to a corresponding position according to the reference second key point, and displaying the text content at the corresponding position.
4. The method according to claim 3, wherein the acquiring a selection instruction of the video content comprises:
displaying the N first key points;
selecting a first key point i, wherein the first key point i is one of the N first key points;
and generating the selection instruction, wherein the selection instruction is used for selecting the video content corresponding to the first key point i.
5. The method according to any one of claims 1-4, further comprising:
acquiring a target attribute parameter of a display screen of a user side;
determining a first size parameter of the first area of the large screen according to the target attribute parameter;
and determining a second size parameter of the second area of the large screen according to the first size parameter and the target attribute parameter.
6. The method according to any one of claims 1-4, further comprising:
receiving a full screen display instruction of the first area;
maximizing the first area and hiding the second area;
displaying the video content of the target object through the first region.
7. The method according to any one of claims 1-4, further comprising:
acquiring a first playing parameter of the video content and a second playing parameter of the text content;
determining a target speed of a speaker;
determining a target adjusting parameter corresponding to the target speed;
adjusting the first playing parameter according to the target adjusting parameter to obtain a target first playing parameter;
adjusting the second playing parameter according to the target first playing parameter to obtain a target second playing parameter;
and playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
8. A screen projection display control device, applied to electronic equipment, wherein the electronic equipment implements screen projection display through a large screen, the large screen comprises a first area and a second area, and the device comprises: a display unit, a determining unit, and a playing unit, wherein,
the display unit is used for displaying the video content of the target object in the first area; and displaying the text content of the target object in the second area;
the determining unit is configured to determine N first key points of the video content, where N is an integer greater than 1; and configure N second key points of the text content according to the N first key points, wherein the N first key points correspond to the N second key points one to one;
and the playing unit is used for realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
9. An electronic device, comprising a processor and a memory, wherein the memory is configured to store one or more programs which are configured to be executed by the processor, and the one or more programs comprise instructions for performing the steps in the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
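
For illustration only (this sketch is not part of the claims or the original disclosure): one way the key-point synchronization recited in claims 1-3 could be realized is to treat each first key point as a timestamp in the video content, each second key point as a character offset in the text content, and look up the one-to-one mapping in both directions. The class and function names, the choice of seconds and character offsets as key points, and the assumption that both sequences increase monotonically are assumptions of this sketch, not features taken from the patent.

```python
from bisect import bisect_right
from dataclasses import dataclass
from typing import Optional


@dataclass
class KeyPointPair:
    """One first key point (video position) mapped one-to-one to one second key point (text position)."""
    video_time_s: float  # first key point: playback position in the video content, in seconds
    text_offset: int     # second key point: character offset in the text content


class SyncController:
    """Keeps the first-area video and the second-area text aligned via N key-point pairs."""

    def __init__(self, pairs: list[KeyPointPair]) -> None:
        # Assumes both sequences increase together: later video positions
        # correspond to later text positions.
        self.pairs = sorted(pairs, key=lambda p: p.video_time_s)
        self._times = [p.video_time_s for p in self.pairs]
        self._offsets = [p.text_offset for p in self.pairs]

    def text_offset_for_time(self, video_time_s: float) -> int:
        """Claim 3 direction: a selected video frame drives where the text content is positioned."""
        i = max(bisect_right(self._times, video_time_s) - 1, 0)
        return self.pairs[i].text_offset

    def video_time_for_keyword(self, transcript: str, keyword: str) -> Optional[float]:
        """Claim 2 direction: a keyword extracted from speech locates a second key point,
        and the corresponding first key point gives the video position to jump to."""
        hit = transcript.find(keyword)
        if hit < 0:
            return None  # keyword not found in the text content
        i = max(bisect_right(self._offsets, hit) - 1, 0)
        return self.pairs[i].video_time_s


# Usage sketch: three key-point pairs, then a spoken keyword triggers a video seek.
if __name__ == "__main__":
    transcript = "Opening remarks ... quarterly results are up ... closing summary"
    pairs = [KeyPointPair(0.0, 0), KeyPointPair(30.0, 18), KeyPointPair(75.0, 44)]
    controller = SyncController(pairs)
    print(controller.video_time_for_keyword(transcript, "quarterly results"))  # -> 30.0
    print(controller.text_offset_for_time(80.0))                               # -> 44
```

Binary search over the sorted key-point lists keeps both lookups at O(log N); the claims themselves do not prescribe any particular data structure or search strategy.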
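Likewise illustrative and hypothetical, a sketch of the play-parameter adjustment recited in claim 7: the speaker's measured speed is mapped to an adjustment factor, the first playing parameter (here, the video playback rate) is scaled by that factor, and the second playing parameter (here, the text scroll rate) is derived from the adjusted video parameter so the two areas stay in step. The mapping thresholds, units, and parameter names are assumptions.

```python
def adjust_play_parameters(video_rate: float, text_scroll_rate: float,
                           speaker_speed_wpm: float) -> tuple[float, float]:
    """Derive target play parameters from the speaker's speed (claim 7 sketch).

    video_rate: current playback rate of the video content (1.0 = normal speed)
    text_scroll_rate: current scroll rate of the text content, in lines per second
    speaker_speed_wpm: measured speaking speed, in words per minute (assumed unit)
    """
    # Hypothetical mapping from speaking speed to a target adjustment factor.
    if speaker_speed_wpm < 100:
        factor = 0.8   # slow speaker: slow the playback down
    elif speaker_speed_wpm < 160:
        factor = 1.0   # typical speed: leave parameters unchanged
    else:
        factor = 1.25  # fast speaker: speed the playback up

    target_video_rate = video_rate * factor
    # The target second playing parameter is derived from the target first playing
    # parameter, so the second area keeps pace with the first area.
    target_text_scroll_rate = text_scroll_rate * (target_video_rate / video_rate)
    return target_video_rate, target_text_scroll_rate


# Example: a fast speaker (180 wpm) raises both parameters by 25%.
print(adjust_play_parameters(1.0, 2.0, 180.0))  # -> (1.25, 2.5)
```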
CN202210819678.7A 2022-07-13 2022-07-13 Screen-throwing display control method, electronic equipment and related device Active CN115277650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210819678.7A CN115277650B (en) 2022-07-13 2022-07-13 Screen-throwing display control method, electronic equipment and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210819678.7A CN115277650B (en) 2022-07-13 2022-07-13 Screen-throwing display control method, electronic equipment and related device

Publications (2)

Publication Number Publication Date
CN115277650A 2022-11-01
CN115277650B CN115277650B (en) 2024-01-09

Family

ID=83765015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210819678.7A Active CN115277650B (en) 2022-07-13 2022-07-13 Screen-throwing display control method, electronic equipment and related device

Country Status (1)

Country Link
CN (1) CN115277650B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102084386A (en) * 2008-03-24 2011-06-01 姜旻秀 Keyword-advertisement method using meta-information related to digital contents and system thereof
WO2015192631A1 (en) * 2014-06-17 2015-12-23 中兴通讯股份有限公司 Video conferencing system and method
CN109246472A (en) * 2018-08-01 2019-01-18 平安科技(深圳)有限公司 Video broadcasting method, device, terminal device and storage medium
CN109819301A (en) * 2019-02-20 2019-05-28 广东小天才科技有限公司 Video playing method and device, terminal equipment and computer readable storage medium
US20200359064A1 (en) * 2019-05-08 2020-11-12 Oath Inc. Generating augmented videos
CN112291614A (en) * 2019-07-25 2021-01-29 北京搜狗科技发展有限公司 Video generation method and device
CN111078070A (en) * 2019-11-29 2020-04-28 深圳市咨聊科技有限公司 PPT video barrage play control method, device, terminal and medium
CN112004138A (en) * 2020-09-01 2020-11-27 天脉聚源(杭州)传媒科技有限公司 Intelligent video material searching and matching method and device
CN112231498A (en) * 2020-09-29 2021-01-15 北京字跳网络技术有限公司 Interactive information processing method, device, equipment and medium
CN112990191A (en) * 2021-01-06 2021-06-18 中国电子科技集团公司信息科学研究院 Shot boundary detection and key frame extraction method based on subtitle video
CN112954380A (en) * 2021-02-10 2021-06-11 北京达佳互联信息技术有限公司 Video playing processing method and device
CN112883235A (en) * 2021-03-11 2021-06-01 深圳市一览网络股份有限公司 Video content searching method and device, computer equipment and storage medium
CN113206970A (en) * 2021-04-16 2021-08-03 广州朗国电子科技有限公司 Wireless screen projection method and device for video communication and storage medium
CN114218413A (en) * 2021-11-24 2022-03-22 星际互娱(北京)科技股份有限公司 Background system for video playing and video editing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHAOHUI LÜ: "Research on Audio-Video Synchronization of Sound and Text Messages", 2013 Sixth International Symposium on Computational Intelligence and Design *
PANG Chungeng: "Three-view fusion, three windows on one screen: research on the development and application of accessible informatized teaching resources for deaf students, based on blended teaching practice with 'courseware + sign language + subtitles'", Modern Vocational Education, no. 12 *
WANG Huijun; GUO Nan; ZHANG Fenfen: "An empirical study on the effect of micro-video subtitle presentation modes on learning outcomes", Digital Education, no. 05 *

Also Published As

Publication number Publication date
CN115277650B (en) 2024-01-09

Similar Documents

Publication Publication Date Title
CN107770626A (en) Processing method, image synthesizing method, device and the storage medium of video material
EP3236345A1 (en) An apparatus and associated methods
CN112188267B (en) Video playing method, device and equipment and computer storage medium
EP3024223B1 (en) Videoconference terminal, secondary-stream data accessing method, and computer storage medium
CN111654715A (en) Live video processing method and device, electronic equipment and storage medium
KR20220148915A (en) Audio processing methods, apparatus, readable media and electronic devices
CN113518232A (en) Video display method, device, equipment and storage medium
CN114630057B (en) Method and device for determining special effect video, electronic equipment and storage medium
CN105094523A (en) 3D animation display method and apparatus
CN113886612A (en) Multimedia browsing method, device, equipment and medium
US11665406B2 (en) Verbal queries relative to video content
US20190129683A1 (en) Audio app user interface for playing an audio file of a book that has associated images capable of rendering at appropriate timings in the audio file
CN114095793A (en) Video playing method and device, computer equipment and storage medium
CN115277650B (en) Screen-throwing display control method, electronic equipment and related device
WO2023182937A2 (en) Special effect video determination method and apparatus, electronic device and storage medium
CN113411532B (en) Method, device, terminal and storage medium for recording content
KR102576977B1 (en) Electronic device for providing interactive education service, and operating method thereof
CN115037905A (en) Screen recording file processing method, electronic equipment and related products
CN112601170B (en) Sound information processing method and device, computer storage medium and electronic equipment
CN114968159A (en) Display control method, electronic equipment and related product
CN110390087A (en) A kind of image processing method and device applied to PowerPoint
CN112672089B (en) Conference control and conference participation method, conference control and conference participation device, server, terminal and storage medium
KR101726844B1 (en) System and method for generating cartoon data
CN111367598B (en) Method and device for processing action instruction, electronic equipment and computer readable storage medium
CN109739373B (en) Demonstration equipment control method and system based on motion trail

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant