CN113794923B - Video processing method, device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN113794923B
CN113794923B (application CN202111091200.9A)
Authority
CN
China
Prior art keywords
video
image sequence
input
window
video image
Prior art date
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202111091200.9A
Other languages
Chinese (zh)
Other versions
CN113794923A (en)
Inventor
陈喆
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202111091200.9A priority Critical patent/CN113794923B/en
Publication of CN113794923A publication Critical patent/CN113794923A/en
Priority to PCT/CN2022/118527 priority patent/WO2023040844A1/en
Application granted granted Critical
Publication of CN113794923B publication Critical patent/CN113794923B/en


Abstract

The application discloses a video processing method and device, belonging to the field of video processing. The method includes: receiving a first input from a user to a first video processing device; in response to the first input, displaying, on a video preview interface, a first video image sequence acquired by a first camera of the first video processing device; and, when a second video image sequence acquired by a second camera of a second video processing device is also displayed on the video preview interface, generating a target video according to the first video image sequence and the second video image sequence; where the target video includes at least one frame of a first video image from the first video image sequence and at least one frame of a second video image from the second video image sequence.

Description

Video processing method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the field of video processing, and particularly relates to a video processing method, a video processing device, electronic equipment and a readable storage medium.
Background
With the development of 5G technology, the speed and quality of real-time video transmission have greatly improved, and the camera imaging quality of electronic devices keeps rising. Recording and editing video with electronic devices has therefore become a product development trend.
At present, video editing is mainly performed on a personal computer (PC): video recorded on a mobile phone is edited on the PC with professional video editing software. However, the editing operations are complex and present a high threshold for ordinary users; this approach is better suited to professional users.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video processing method, apparatus, electronic device, and readable storage medium, which can address the complexity and difficulty of video editing operations in the related art.
In a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input from a user to a first video processing device;
in response to the first input, displaying, on a video preview interface, a first video image sequence acquired by a first camera of the first video processing device; and
when a second video image sequence acquired by a second camera of a second video processing device is displayed on the video preview interface, generating a target video according to the first video image sequence and the second video image sequence;
wherein the target video includes at least one frame of a first video image from the first video image sequence and at least one frame of a second video image from the second video image sequence.
In a second aspect, an embodiment of the present application provides a first video processing apparatus, including:
a first receiving module, configured to receive a first input from a user to the first video processing device;
a first display module, configured to display, in response to the first input, a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
a generation module, configured to generate a target video according to the first video image sequence and a second video image sequence when the second video image sequence, acquired by a second camera of a second video processing device, is displayed on the video preview interface;
wherein the target video comprises at least one frame of a first video image of the first video image sequence and at least one frame of a second video image of the second video image sequence.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of the present application, video image sequences acquired by the cameras of different video processing devices can be displayed on a video preview interface. Video editing is performed according to the first video image sequence and the second video image sequence acquired by the cameras of different video processing devices, generating a target video that includes at least one frame of a first video image and at least one frame of a second video image, where the at least one frame of the first video image comes from the first video image sequence generated by the first video processing device, and the at least one frame of the second video image comes from the second video image sequence generated by the second video processing device. In the video processing method provided by the embodiments of the present application, when the video image sequences respectively acquired by the cameras of different video processing devices are displayed on the video preview interface, the different video image sequences generated by the different video processing devices are edited to generate the target video. No professional video editing software is required, which reduces the difficulty and complexity of video editing.
Drawings
FIG. 1 is a first flowchart of a video processing method provided by an embodiment of the present application;
FIG. 2A is a first schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2B is a second schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2C is a third schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2D is a fourth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2E is a fifth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2F is a sixth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2G is a seventh schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2H is an eighth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2I is a ninth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2J is a tenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2K is an eleventh schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2L is a twelfth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2M is a thirteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 3 is a second flowchart of a video processing method provided by an embodiment of the present application;
FIG. 4A is a fourteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 4B is a fifteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 5 is a block diagram of a video processing apparatus provided by an embodiment of the present application;
FIG. 6 is a first schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application;
FIG. 7 is a second schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. The described embodiments are evidently some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to FIG. 1, a flowchart of a video processing method according to an embodiment of the present application is shown. The method may be applied to a first video processing apparatus and may specifically include the following steps:
step 101, a first input from a user to a first video processing device is received.
Illustratively, the first input may include, but is not limited to: a click input by the user on the first video processing device, a voice instruction input by the user, or a specific gesture input by the user; the specific form may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
The specific gesture in the embodiments of the present application may be any one of a single-click gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture. The click input in the embodiments of the present application may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
Step 102, in response to the first input, displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface.
For example, the video preview interface may be a shooting preview interface for video: the first video image sequence is then a video image sequence acquired by the first camera in real time, and the video being recorded in real time is displayed frame by frame on the shooting preview interface.
Alternatively, the video preview interface may be a playback preview interface for an already generated video: the first video image sequence is then a video image sequence previously acquired by the first camera, and the recorded video is displayed frame by frame on the playback preview interface.
Step 103, when a second video image sequence acquired by a second camera of a second video processing device is displayed on the video preview interface, generating a target video according to the first video image sequence and the second video image sequence.
In this embodiment, the second video image sequence is similar to the first video image sequence, and may be a video image sequence of a video recorded in real time, or may be a video image sequence of a video already recorded.
The second video processing device is communicatively connected to the first video processing device; therefore, the first video processing device may display on the video preview interface not only its own first video image sequence but also the second video image sequence generated by another video processing device (here, the second video processing device).
For ease of understanding, the following description takes the case where each video image sequence is acquired by its camera in real time and the video preview interface is a shooting preview interface. When the video image sequences come from already recorded videos, the execution principle of the method in the embodiments of the present application is similar, so it is not described in detail.
In addition, the application does not limit the display order of the first video image sequence and the second video image sequence in the video preview interface.
Wherein the target video comprises at least one frame of a first video image of the first video image sequence and at least one frame of a second video image of the second video image sequence.
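As a concrete illustration of step 103, the sketch below models the target video as a prefix of the first sequence joined with a suffix of the second, i.e., a single camera switch during recording. The frame representation, the `switch_index` parameter, and the merge policy are illustrative assumptions, not the patent's required implementation.

```python
def generate_target_video(first_seq, second_seq, switch_index):
    """Combine frames from two camera sequences into one target video.

    Models one camera switch: frames before switch_index come from the
    first camera, the remainder from the second. The clamp guarantees the
    result contains at least one frame from each source sequence.
    """
    if not first_seq or not second_seq:
        raise ValueError("each source sequence must contribute at least one frame")
    switch_index = max(1, min(switch_index, len(first_seq), len(second_seq)))
    return list(first_seq[:switch_index]) + list(second_seq[switch_index - 1:])

first = [("cam1", i) for i in range(4)]   # frames from the first camera
second = [("cam2", i) for i in range(4)]  # frames from the second camera
target = generate_target_video(first, second, 2)
```

A real implementation would operate on decoded video frames and re-encode the result; the list operations here only make the selection logic visible.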
In the embodiments of the present application, video image sequences acquired by the cameras of different video processing devices can be displayed on a video preview interface. Video editing is performed according to the first video image sequence and the second video image sequence acquired by the cameras of different video processing devices, generating a target video that includes at least one frame of a first video image and at least one frame of a second video image, where the at least one frame of the first video image comes from the first video image sequence generated by the first video processing device, and the at least one frame of the second video image comes from the second video image sequence generated by the second video processing device. In the video processing method provided by the embodiments of the present application, when the video image sequences respectively acquired by the cameras of different video processing devices are displayed on the video preview interface, the different video image sequences generated by the different video processing devices are edited to generate the target video. No professional video editing software is required, which reduces the difficulty and complexity of video editing.
Optionally, the video preview interface includes a main window for displaying the first video image sequence and a first sub-window for displaying the second video image sequence.
Optionally, different windows in the video preview interface are used for displaying video data recorded in real time by different video processing devices.
The shooting preview interface may include a main window and at least one sub-window. Optionally, there may be multiple sub-windows, so as to display the video image sequences recorded in real time by multiple other video processing devices communicatively connected to the first video processing device.
In addition, the main window and the sub-windows in the shooting preview interface display video image sequences from different video processing apparatuses, and different sub-windows may also display video image sequences from different video processing apparatuses.
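The window-to-device assignment just described can be sketched as a simple mapping in which the main window is bound to the local device and each connected device occupies its own sub-window. The class and method names here are invented for illustration; the patent does not prescribe this data structure.

```python
class PreviewInterface:
    """Toy model of the shooting preview interface's window-to-device map."""

    def __init__(self, local_device_id):
        # The main window displays the local (first) device's capture.
        self.windows = {"main": local_device_id}
        self._next_sub = 1

    def attach(self, device_id):
        """Bind a newly connected device to the next free sub-window,
        so that different devices are always shown in different windows."""
        if device_id in self.windows.values():
            raise ValueError(f"{device_id} is already displayed in a window")
        slot = f"sub{self._next_sub}"
        self._next_sub += 1
        self.windows[slot] = device_id
        return slot

ui = PreviewInterface("phone_M")
ui.attach("phone_A")   # bound to the first sub-window
ui.attach("phone_B")   # bound to the second sub-window
```

The invariant enforced by `attach` is the one stated in the text: no two windows show the same device's sequence.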
The following description takes as an example a first video processing device that is mobile phone M, with the other video processing devices communicatively connected to it being mobile phone A, mobile phone B, and mobile phone C.
In addition, the video image sequences displayed in different windows of the shooting preview interface may be sequences recorded in real time by different mobile phones from different shooting angles in the same shooting scene, or sequences recorded at multiple camera positions in different shooting scenes. The shooting scene may be a sports scene, such as a basketball or football game.
The video processing device in the embodiments of the present application may be a mobile terminal, including a mobile phone, a tablet, and the like. For example, as shown in FIG. 2A, taking a mobile phone as an example, in the video recording interface 11 of the mobile phone camera, the user may enter the multi-camera video clip mode through a two-finger zoom on the screen (for example, zooming out to the minimum), so as to display the shooting preview interface. As shown in FIG. 2B, the zoom operation divides the shooting preview interface into a plurality of windows. The larger window is the main window 21, which by default displays the image acquired by the camera of the first video processing apparatus (for example, mobile phone M) of the method according to the embodiments of the present application. The remaining smaller windows are sub-windows; FIG. 2B shows 8 sub-windows (e.g., sub-window 22), each defaulting to a ready-to-connect state indicated by a plus sign. In this state, mobile phone M has not yet connected to another video processing device (e.g., another mobile phone) for multi-device video editing.
The connection between different video processing devices may be WiFi, Bluetooth, etc. The description below takes a WiFi connection as an example; communication connections such as Bluetooth work in the same way and are not described again.
For example, once mobile phone M establishes WiFi connections with mobile phone A, mobile phone B, and mobile phone C, the video data they record in real time can be transmitted to mobile phone M in real time.
Illustratively, the multi-camera video clip mode requires multiple mobile phones to work cooperatively, so the first video processing device first needs to connect to the other phones. In the shooting preview interface shown in FIG. 2C, i.e., the multi-camera video clip mode interface, the user can bring up the mobile phone search interface shown in FIG. 2D by clicking any sub-window, here sub-window 22.
After the user clicks any sub-window, mobile phone M establishes a WiFi hotspot and waits for other mobile phones to connect. Other mobile phones that are also in the multi-camera video clip mode search for nearby WiFi signals. In this mode, a phone on which no sub-window has been clicked searches for nearby WiFi signals, while a phone on which a sub-window has been clicked establishes a WiFi hotspot. The WiFi hotspot may be a password-free hotspot.
Mobile phone A, mobile phone B, and mobile phone C likewise switch from the normal video mode to the multi-camera video clip mode through a two-finger zoom on the shooting preview interface; see, for example, the multi-camera video clip mode interface of mobile phone A shown in FIG. 2E, that of mobile phone B shown in FIG. 2F, and that of mobile phone C shown in FIG. 2G. The main window of each of these interfaces displays the video content recorded by the respective phone. The principles of FIG. 2E, FIG. 2F, and FIG. 2G are similar to the multi-camera video clip mode interface of mobile phone M shown in FIG. 2C and are not repeated here.
Note that the same reference numerals in FIG. 2A to FIG. 2M denote the same objects; they are therefore not explained again for each drawing, and the explanation given for other drawings applies.
In this embodiment, the hotspot information of the WiFi hotspot may carry certain parameter information of mobile phone M, for example, a parameter indicating that mobile phone M is in the multi-camera video clip mode, and identification information of mobile phone M.
In this embodiment, when two mobile phones connect through a WiFi hotspot in the multi-camera video clip mode for the first time, the WiFi connection is established in an authentication mode. If it is not the first time they connect through a WiFi hotspot in this mode, authentication is not needed and the WiFi connection can be established directly.
In this embodiment, the first-time connection through a WiFi hotspot in the multi-camera video clip mode may be implemented as follows. The other mobile phones in the multi-camera video clip mode (phones other than mobile phone M that have not established a WiFi hotspot) search for WiFi hotspots; if the hotspot information of a found hotspot indicates a multi-camera video clip mode hotspot, they actively connect to it and enter the authentication mode. Specifically, each of the other phones sends its own information to mobile phone M (also called the master phone) in the authentication mode and waits for a connection application from the master phone. Once other phones request to connect to the master phone's WiFi hotspot, as shown in FIG. 2D, the master phone displays the identification information of each phone requesting authentication on the mobile phone search interface.
In this embodiment, when it is not the first time that two mobile phones connect through a WiFi hotspot in the multi-camera video clip mode, the connection may be implemented as follows. The other mobile phones in the multi-camera video clip mode (phones other than mobile phone M that have not established a WiFi hotspot), after searching for WiFi hotspots, actively connect to any found hotspot whose hotspot information indicates a multi-camera video clip mode hotspot, so that mobile phone M establishes a WiFi connection with the other phones directly.
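The hotspot discovery behaviour described in the last two paragraphs can be sketched as follows: the host advertises hotspot information carrying a clip-mode flag plus its identifier, and a searching phone connects only to hotspots carrying that flag. All field names here are assumptions for illustration; the actual hotspot information format is not specified in the text.

```python
def make_hotspot_info(device_id, first_connection):
    """Hotspot information advertised by the host phone (e.g., phone M).

    Carries a flag marking the multi-camera video clip mode, the host's
    identifier, and whether authentication is required (first connection).
    """
    return {
        "mode": "multi_camera_clip",
        "device_id": device_id,
        "auth_required": first_connection,
    }

def select_hotspot(scanned):
    """A searching phone actively connects only to a clip-mode hotspot."""
    for info in scanned:
        if info.get("mode") == "multi_camera_clip":
            return info
    return None

scan_results = [{"mode": "home_wifi"}, make_hotspot_info("phone_M", True)]
chosen = select_hotspot(scan_results)
```

When `auth_required` is set, the searching phone would enter the authentication exchange described above before the WiFi connection completes; otherwise it connects directly.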
Optionally, the method according to the embodiments of the present application may include: displaying at least one device identification indicating a video processing device communicatively connected to the first video processing device; receiving a third input by the user on a target device identification among the at least one device identification; and, in response to the third input, displaying in a second sub-window a third video image sequence acquired by a third camera of a third video processing device, where the third video processing device is the video processing device indicated by the target device identification.
Illustratively, the third input may include, but is not limited to: a click input by the user on the target device identification, a voice instruction input by the user, or a specific gesture input by the user; the specific form may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
The specific gesture in the embodiments of the present application may be any one of a single-click gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture. The click input in the embodiments of the present application may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
The device identifications may be the identification information of each mobile phone displayed on the mobile phone search interface shown in FIG. 2D; here the device identifications are mobile phone A, mobile phone B, and mobile phone C. The mobile phone search interface may also display a control 31 for mobile phone M, so when displaying the identification information of each phone requesting authentication, the identification information can be laid out in the mobile phone search interface according to each phone's distance and bearing relative to mobile phone M; here, mobile phone C is nearest to mobile phone M, followed by mobile phone B, with mobile phone A farthest.
In the mobile phone search interface of mobile phone M, the user can drag the device identification of the phone to be connected (mobile phone C in this illustration) into the to-be-connected area 32 and then click the "Connect" control 33, so that mobile phone M issues a connection request to mobile phone C (e.g., the third video processing device here). Mobile phone C receives the connection application from mobile phone M, and its user clicks "Agree" in the multi-camera video clip mode of mobile phone C, establishing two-way communication between mobile phone C and mobile phone M. In addition, mobile phone C does not need to perform multi-camera video editing itself; it only needs to provide the video image sequence recorded in real time at its shooting angle and transmit it to mobile phone M. Mobile phone C can therefore exit the multi-camera video clip mode and display only the preview picture of the video it is shooting.
Similarly, mobile phone M may connect with other phones for video recording and editing by clicking the other sub-windows in FIG. 2C; for example, mobile phone M also establishes WiFi connections with mobile phone B and with mobile phone A. Mobile phone A, mobile phone B, and mobile phone C, each having established a WiFi connection with mobile phone M, can then transmit the videos they record in real time to mobile phone M over those WiFi connections.
As shown in FIG. 2H, sub-window 22 (e.g., the second sub-window) in the multi-camera video clip mode interface of mobile phone M displays a preview picture of the video recorded by mobile phone C (e.g., the third video processing device); sub-window 23 (e.g., the first sub-window) displays a preview picture of the video recorded by mobile phone B (e.g., the second video processing device); sub-window 24 displays a preview picture of the video recorded by mobile phone A; and, in the initial state, the main window 21 displays a preview picture of the video recorded by mobile phone M.
For the embodiment of FIG. 1, the manner of displaying the video image sequence of the second video processing apparatus is similar to the manner, illustrated here, of displaying the video recorded by the camera of mobile phone C in sub-window 22, and is not described in detail.
After mobile phone M, mobile phone A, mobile phone B, and mobile phone C start recording, the three sub-windows and the main window display preview pictures of the videos being recorded in real time at each end.
In this example, the main window initially displays a preview picture of the video recorded by the first video processing device, that is, mobile phone M, and the sub-windows display preview pictures of the videos recorded by the other phones communicatively connected to mobile phone M. In other embodiments, the main window may initially display no device's video at all; in that case, the preview picture of the video recorded by mobile phone M is also displayed in a sub-window.
In the embodiments of the present application, at least one device identification indicating a video processing device communicatively connected to the first video processing device is displayed; a third input by the user on a target device identification among the device identifications is received; and, in response to the third input, a third video image sequence acquired by the third camera of the video processing device indicated by the target device identification is displayed in a sub-window. This realizes video recording in a multi-camera-position mode, in which video images from different camera positions are clipped to generate the target video. By communicatively connecting multiple video processing devices to the first video processing device, video can be clipped while the recorded video is being displayed, and the video displayed in the main window is taken as the target video (what you see is what you get), which simplifies video editing.
In addition, in the embodiments of the present application, by communicatively connecting the first video processing device with at least one other video processing device (i.e., a video processing device other than the first), the preview pictures of the videos recorded in real time by the other devices can be displayed in the sub-windows of the video preview interface of the first video processing device, with the preview pictures of different devices shown in different sub-windows. Different videos acquired by different devices can thus be distinguished by their sub-windows, and on top of the video recording function, multiple video processing devices communicating with each other can realize editing the video while recording it; the video editing of the main-window video images is achieved in a what-you-see-is-what-you-get manner, which simplifies video editing. Moreover, by displaying the preview picture of the video recorded in real time by the first video processing device in the main window, the initial video segment of the clipped target video comes from the video data recorded by the first video processing device, and the first video processing device serves as the control device for video clipping, so that the target video obtained through the main window better matches the video clipping scene.
Alternatively, the main window may be larger than the sub-windows and located near the center of the video preview interface, which facilitates the user's browsing of the video content displayed in the main window.
Optionally, the method according to the embodiment of the present application may further include: determining the relative shooting azimuth of the third camera according to the image content of the third video image sequence and the first video image sequence, wherein the relative shooting azimuth is the shooting azimuth of the third camera relative to the first camera; and then, determining the target display position of the second sub-window according to the relative shooting azimuth.
Alternatively, as shown in fig. 2H, the user may change the position of a sub-window by dragging a sub-window in which video content is displayed onto another sub-window (whether or not video content is displayed in it), where the target position of the sub-window is determined according to the relative shooting azimuth.
For example, in fig. 2H, the sub-window 22 displays the video picture of the mobile phone A, the sub-window 23 displays the video picture of the mobile phone B, and the main window 21 displays the video picture of the current mobile phone, that is, the mobile phone M. Assuming that the mobile phone A shoots the photographed person from the left side of the mobile phone M, the mobile phone B shoots the photographed person from the right side of the mobile phone M, and the mobile phone M shoots the front of the photographed person, the user can move the sub-window 23 to any small window on the right side of the main window 21, so that the position of the sub-window 23 relative to the main window 21 conveniently represents the shooting orientation of the camera corresponding to the video picture in the sub-window 23 relative to the camera corresponding to the video picture in the main window 21.
In the above embodiment, the user may trigger the display of the interface of fig. 2D by clicking on the sub-window 22 in fig. 2C, so that the connection of the first video processing apparatus and the third video processing apparatus is achieved by operating on fig. 2D; after the first video processing device is communicatively coupled to the third video processing device, the sub-window 22 (second sub-window) is configured to display a third sequence of video images captured by a third camera of the third video processing device. In this embodiment, the shooting orientation of the third camera relative to the first camera may be determined according to the image content of the third video image sequence and the image content of the first video image sequence, so as to adjust the second sub-window to a corresponding position for display. The position of the sub-window can be automatically or manually adjusted according to the relative shooting direction.
In the embodiment of the application, the shooting azimuth of the third camera relative to the first camera can be determined according to the image content of the third video image sequence and the first video image sequence; and determining the target display position of the second sub-window based on the shooting direction and the position of the main window, so that a user can identify the shooting angle of the camera corresponding to each sub-window through the relative position relation between the second sub-window and the main window in the video preview interface.
For example, when the mobile phone M shoots the shooting object from directly in front of it, and the mobile phone C shoots the shooting object from the northwest of the mobile phone M, the video content shot by the mobile phone C can be displayed in the sub-window 22.
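The mapping from a camera's relative shooting azimuth to a sub-window position is not fixed by the embodiment; the following is a minimal sketch, assuming the azimuth is expressed as a signed angle in degrees relative to the first camera's shooting direction (the function name and the thresholds are illustrative assumptions, not part of the embodiment):

```python
def subwindow_slot(relative_azimuth_deg):
    """Map a camera's shooting azimuth relative to the first camera
    (0 deg = same direction, negative = to its left, positive = to its
    right) to a sub-window column on the video preview interface.
    Hypothetical helper; the embodiment does not fix a concrete mapping."""
    if relative_azimuth_deg < -15:
        return "left-of-main"
    if relative_azimuth_deg > 15:
        return "right-of-main"
    return "above-main"

# Phone A shoots from the left of phone M, phone B from its right:
assert subwindow_slot(-60) == "left-of-main"
assert subwindow_slot(75) == "right-of-main"
```

A real implementation could estimate the relative azimuth from the overlap between the image contents of the two video image sequences, as the embodiment describes, and then place the sub-window in the returned slot.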
Optionally, the video processing method of the embodiment of the present application further includes: receiving a fourth input of a user to the video preview interface; when the first video image sequence and the second video image sequence are video image sequences acquired in real time in a video recording process, responding to the fourth input, and controlling the first camera and the second camera to stop acquiring video images; and stopping playing the first video and the second video in response to the fourth input when the first video image sequence is a video image in the recorded first video and the second video image sequence is a video image in the recorded second video.
Illustratively, the fourth input may include, but is not limited to: a click input by the user on the video preview interface, a voice instruction input by the user, or a specific gesture input by the user; the specific form may be determined according to actual use requirements, which is not limited in the embodiment of the present application.
The specific gesture in the embodiment of the application can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application can be a single-click input, a double-click input or an input of any number of clicks, and can also be a long-press input or a short-press input.
The following description takes, as an example, the case where the first video image sequence and the second video image sequence are video image sequences acquired in real time during video recording.
Illustratively, as shown in fig. 2I, the main window 21 has a preset control 31, and clicking the preset control 31 controls the start and stop of video recording. Specifically, by clicking the preset control 31, the mobile phone M can control itself and the connected mobile phone A, mobile phone B and mobile phone C to start or end recording; the mobile phone A, the mobile phone B and the mobile phone C can each only control the start or end of their own video recording. As shown in fig. 2J, the conventional video recording interface of the mobile phone A has a control 41, and the user of the mobile phone A can start or end the video recording of the mobile phone A by clicking the control 41.
If any one of the mobile phones A, B and C pauses video recording by clicking the recording control of that mobile phone (e.g. the control 41 in fig. 2J), the master mobile phone, that is, the mobile phone M, can control the paused mobile phone to resume video recording.
In the embodiment of the application, through the fourth input to the video preview interface, when the video content displayed in the main window and the sub-windows is a video image sequence acquired in real time, the cameras corresponding to the windows are controlled, in response to the fourth input, to stop acquiring video images; when the video content displayed in the main window and the sub-windows consists of video images in recorded videos, each window can be made to stop playing its recorded video. Thus, stopping the recording or playing of the video images acquired by the plurality of cameras is realized through a one-key input on the video preview interface.
By way of example, through an input to the preset control in the main window, the first video processing device and the other video processing devices communicatively connected with it can be controlled in a unified manner to start or stop (including pause) video recording, and unified control over multiple camera positions is achieved by a one-key operation on the main window.
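The control logic described above (the main-window control governs all connected devices, while each device's own control governs only itself, and the master can resume a paused device) can be sketched as follows; the class, method and state names are illustrative assumptions, not from the embodiment:

```python
class RecordingController:
    """Sketch of the master phone's one-key control over multiple
    camera positions. Names are hypothetical."""

    def __init__(self, master, connected):
        self.states = {name: "idle" for name in [master] + list(connected)}
        self.master = master

    def toggle_master(self):
        # One-key input on the main window: all devices start or stop together.
        target = "recording" if self.states[self.master] != "recording" else "stopped"
        for name in self.states:
            self.states[name] = target

    def toggle_device(self, name):
        # A device's own control only pauses/resumes that single device.
        self.states[name] = "paused" if self.states[name] == "recording" else "recording"


ctl = RecordingController("M", ["A", "B", "C"])
ctl.toggle_master()                 # everyone starts recording
ctl.toggle_device("B")              # phone B pauses itself via its own control
assert ctl.states == {"M": "recording", "A": "recording",
                      "B": "paused", "C": "recording"}
ctl.toggle_device("B")              # the paused phone can be resumed
assert ctl.states["B"] == "recording"
```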
Optionally, as shown in fig. 2I, each window displaying the video data of another video processing device (i.e. a video processing device communicatively connected to the first video processing device) also has a preset control for controlling the video recording state of that device, specifically including a recording state and a recording-paused state. In particular, the state of the control 32 in the sub-window 22 indicates that the mobile phone A is currently recording, the state of the control 33 in the sub-window 23 indicates that the mobile phone B is currently paused, and the state of the control 34 in the sub-window 24 indicates that the mobile phone C is currently paused.
In the embodiment of the application, each window displaying the video of a video processing device is provided with a preset control, so that the recording state of that video processing device can be controlled through the preset control, and the user can intuitively learn from the state of the control whether the device is recording or paused.
Alternatively, in performing step 103, a second input from a user to the first sub-window may be received; then, in response to the second input, the display contents of the main window and the first sub-window are exchanged; and finally, video stitching is performed on the at least one frame of first video image and the at least one frame of second video image displayed in the main window, so as to obtain the target video.
Illustratively, the second input may include, but is not limited to: a click input by the user on the first sub-window, a voice instruction input by the user, or a specific gesture input by the user; the specific form may be determined according to actual use requirements, which is not limited in the embodiment of the present application.
The second input may also be an input that causes an overlap of partial window areas between a sub-window and a main window, for example, an input that drags the first sub-window to the main window.
The specific gesture in the embodiment of the application can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application can be a single-click input, a double-click input or an input of any number of clicks, and can also be a long-press input or a short-press input.
For example, the video data displayed in the first sub-window and in the main window may be interchanged, and the first video segment and the second video segment, which are the video data from different video processing devices successively displayed in the main window, may be spliced according to the display order of the main window to generate the target video.
The video processing device corresponding to the video data displayed in the main window before the second input is received may be another video processing device or may be the first video processing device; the specific examples above take the case where the video data of the first video processing device is displayed in the main window in the initial state as an example, but the present application is not limited thereto. The video data of the first video processing device may also be displayed in a sub-window in the initial state, i.e. before the second input is received.
In this embodiment, the main window displays the video picture of the target video that is finally recorded and clipped, while the sub-windows display the pictures shot by the other video processing devices. During video recording, if the user of the mobile phone M finds that the video content of another connected mobile phone is more suitable and should be added to the target video, the user can drag the corresponding sub-window to the position of the main window; dragging the sub-window onto the main window triggers the clipping of the video. Illustratively, as shown in fig. 2K, the user of the mobile phone M drags the sub-window 22 onto the main window 21 in the arrow direction to switch the camera position.
Illustratively, suppose the input time point of the second input is t1. Before t1, the main window plays the video content recorded by the mobile phone M, i.e. the first video segment (including at least one frame of the first video image); after t1, the main window plays the video content recorded by the mobile phone A corresponding to the dragged sub-window, i.e. the second video segment (including at least one frame of the second video image). Therefore, through the second input, the display content of the main window is switched from the first video segment to the second video segment, and the display content of the main window is the target video finally recorded. When the video is stitched, the first video segment and the second video segment are spliced in time order to obtain the target video.
Of course, the target video may also be obtained through multiple second inputs. For example, when the user wants to add the video content displayed in one sub-window to the target video, the user drags that sub-window onto the main window (i.e. performs the second input), so that, in response, the video content displayed in that sub-window is added to the target video; if the user then wishes to add the video content displayed in another sub-window, the user continues with another second input dragging the other sub-window onto the main window, so that, in response, the video content displayed in the other sub-window is also added to the target video. When the last recording mobile phone stops recording, or when the master mobile phone stops recording, the content displayed in the main window is stored in the master mobile phone, and the target video is obtained. For example, in a sports scene, the recording can be switched to a suitable shooting angle at any time.
In this embodiment, after the drag operation shown in fig. 2K, the interface jumps to that shown in fig. 2L: the video content recorded by the first video processing device (mobile phone M) after t1 is displayed in the sub-window 22, and the video content recorded by the mobile phone A after t1 is displayed in the main window 21; finally, the at least two video segments are spliced to form the recorded target video.
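The splicing described above can be sketched as a pure timeline computation: each drag records a switch event, and the target video is the ordered list of segments between switches. The representation of switch events and segments below is an illustrative assumption, not the embodiment's data format:

```python
def build_target_segments(switch_events, total_duration):
    """Given the times at which the main window's source device changed,
    as (time, device) pairs with the first entry at t=0, return the
    ordered segments (device, start, end) spliced into the target video."""
    segments = []
    for i, (start, device) in enumerate(switch_events):
        end = switch_events[i + 1][0] if i + 1 < len(switch_events) else total_duration
        segments.append((device, start, end))
    return segments


# Main window shows phone M until t1 = 8 s, then phone A until recording stops at 20 s:
assert build_target_segments([(0, "M"), (8, "A")], 20) == [("M", 0, 8), ("A", 8, 20)]
```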
In the embodiment of the application, a first video image sequence acquired by a first camera of a first video processing device can be displayed in the main window of the video preview interface, and a second video image sequence acquired by a second camera of a second video processing device can be displayed in the first sub-window of the video preview interface. During video recording, the display contents of the main window and the first sub-window can be exchanged through a second input to the first sub-window, that is, the main window switches to displaying the video image sequence acquired by the second camera, and the first sub-window switches to displaying the video image sequence acquired by the first camera. Since different video processing devices can shoot the same scene from different angles, camera-position switching during recording is thereby realized. The target video is obtained from the content displayed in the main window; specifically, the videos displayed in the main window are spliced in their display order to obtain the target video. Recording can thus be performed in the same scene based on at least two video processing devices, which reduces the operational difficulty and complexity of video editing and improves video processing efficiency.
In the embodiment of the application, video recording can be performed with a plurality of video processing devices, with the video data recorded in real time by different devices displayed in different windows of the video preview interface. During recording, camera-position switching is realized by the input of dragging a sub-window onto the main window; when the video is recorded, the video data from different video processing devices successively displayed in the main window, namely the first video data and the second video data, are spliced according to the display order of the main window. The same scene can therefore be recorded based on at least two video processing devices, reducing the operational difficulty and complexity of video editing and improving video processing efficiency. The video content recorded by the plurality of video processing devices is displayed in real time on the equipment of one of them, and the user can switch camera positions in real time by dragging different sub-windows onto the main window, which improves the user's operability during video recording.
Optionally, the method of the embodiment of the present application may further include: storing the target video and the video data recorded by each video processing device when the video processing device corresponding to each window in the shooting preview interface has stopped video recording; and storing, for each video segment in the target video, the mapping relation between its time points in the target video and the corresponding time points in the video data recorded by the respective video processing device.
After the first video processing device in the embodiment of the application establishes communication connections with the other video processing devices, and after the other video processing devices begin recording, the first video processing device can receive the video content recorded by the other video processing devices in real time; when all video processing devices have stopped recording, it stores the video data recorded by each video processing device as well as the obtained target video.
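The stored mapping between time points of the target video and time points in each device's full recording might, under the illustrative segment representation below, be computed as follows (field names are assumptions, not from the embodiment):

```python
def time_mapping(segments):
    """For each spliced segment (device, src_start, src_end), record the
    mapping between its time range in the target video and the
    corresponding range in that device's full recording."""
    mapping, t = [], 0.0
    for device, src_start, src_end in segments:
        length = src_end - src_start
        mapping.append({"device": device,
                        "target": (t, t + length),
                        "source": (src_start, src_end)})
        t += length
    return mapping


# Seconds 2-8 of phone M's recording followed by seconds 5-12 of phone A's:
m = time_mapping([("M", 2, 8), ("A", 5, 12)])
assert m[0]["target"] == (0.0, 6.0) and m[0]["source"] == (2, 8)
assert m[1]["target"] == (6.0, 13.0) and m[1]["source"] == (5, 12)
```

Such a mapping is what makes the later fine-tuning possible: given a splice point in the target video, the corresponding position in the original recording can be looked up to fetch extra frames.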
Optionally, the method according to the embodiment of the present application may further include: receiving a fifth input of a user to the target video; in response to the fifth input, displaying a video adjustment window in which an adjustment control, at least one first video thumbnail and at least one second video thumbnail are included, where the at least one first video thumbnail is a thumbnail of the at least one frame of first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of second video image, and the adjustment control is used for updating the video frames of the target video; then, receiving a sixth input of the user to the adjustment control; and, in response to the sixth input, updating the display position of the adjustment control, and updating the video frames of the target video according to the updated display position of the adjustment control.
The specific implementation manner of the fifth input and the sixth input in the present embodiment and the seventh input and the eighth input in the following embodiments may refer to the above description of the related examples of the first input, and the principles are similar, and are not repeated here.
Illustratively, the target video may be stored in an album of the mobile phone M, and the user clicks the edit control (i.e., the fifth input) on the target video stored in the album. After clicking the edit control, the interface of the video adjustment window shown in fig. 2M of the target video may be entered.
Optionally, the video adjustment window includes a main playing progress bar of the target video, where the main playing progress bar includes a preset identifier 53, and the preset identifier 53 moves on the playing progress bar along with a change of a video playing progress in the video playing window.
The preset identifier in the application refers to characters, symbols, images and the like used for indicating information; a control or another container may serve as the carrier for displaying the information, which includes but is not limited to characters, symbols and images.
The video adjustment window includes a sub-playing progress bar for each video segment in the target video, and a movable adjustment control is displayed at each splice of different video segments.
Illustratively, as shown in fig. 2M, the video adjustment window includes a video play window 54, and the video play window 54 is used to display a picture of the target video; the main playing progress bar 52 is a progress bar of the video played in the video playing window 54, and the main playing progress bar 52 is provided with a preset identifier 53 moving along with playing time.
Further, as shown in fig. 2M, suppose the target video is formed by splicing a segment of video A, a segment of video B and a segment of video C in this order. The video clip interface further includes a plurality of sub-playing progress bars above the main playing progress bar 52; specifically, the sub-playing progress bars of the video segments constituting the target video may be displayed in a plurality of rows in order of playing time from front to back, here the sub-playing progress bar 61 of video A, the sub-playing progress bar 62 of video B and the sub-playing progress bar 63 of video C in sequence. In addition, an adjustment control 51 may be included at the splice of different sub-playing progress bars; the adjustment control 51 can be understood as a fine-tuning control, and each time point of camera-position switching in the progress bar of the complete target video may correspond to a movable adjustment control. For example, when the recorded target video is composed of the three segments video A, video B and video C, it includes two adjustment controls 51: one for adjusting the video frames at the splice of video A and video B, and the other for adjusting the video frames at the splice of video B and video C.
Alternatively, as shown in fig. 2M, dragging the preset mark 53 may control the playing progress of the video in the video playing window 54, and in addition, clicking the preset mark 53 may control the video in the video playing window 54 to pause playing or continue playing, where the display patterns of the preset mark 53 may be different in two states of pause playing and continue playing.
In the embodiment of the application, the preset mark on the main playing progress strip in the video clip interface can not only control the playing progress of the video in the video playing window by moving the mark; moreover, the playing state of the video in the video playing window can be changed through the input of the preset identifier.
Optionally, the method of the embodiment of the application can adjust the video frames at the splicing part through the adjusting control; thumbnail images of the video frames at the splicing position are respectively displayed on the left side and the right side of the adjusting control.
Illustratively, as shown in fig. 2M, two thumbnails may be displayed on both sides of the adjustment control 51 between video a and video B, specifically including: a thumbnail 71 of the last frame image of video a located above the sub-play progress bar 61, and a thumbnail 72 of the first frame image of video B located above the sub-play progress bar 62; in addition, two thumbnails of the video frames at another splice are also shown in fig. 2M, and will not be described here again.
Of course, in the example herein, the adjustment control 51 has not moved, and then after the adjustment control 51 has moved to the left or right in the direction of the arrow in fig. 2M, the position where the adjustment control 51 is stopped may correspond to a different video clip splice, and then thumbnails of two frames of images from different video clips at the splice are also displayed on the left and right sides of the adjustment control 51.
In the embodiment of the application, the thumbnails of the two frames of images at the joint of the two video clips are displayed, so that when a user moves the adjustment control to fine tune the target video, the user can judge whether the video picture is properly spliced or not by browsing the two thumbnails at the joint.
Illustratively, in fig. 2M, the adjustment control 51 may be moved left or right in the direction of the arrow to trigger adjustment of the splice of different video segments in the target video, after which the save control 55 may be clicked to update the target video.
Illustratively, taking as an example the sixth input to the adjustment control 51 between the two sub-playing progress bars of video A and video B in fig. 2M: by moving the adjustment control 51 to the left, several frames can be removed from the tail of video A and the same number of video frames added at the head of video B, so as to adjust the video frames at the splice of video A and video B.
Alternatively, the step of updating the video frame of the target video may be performed by at least one of: updating the spliced video frame of the first video image sequence; updating a splice starting video frame of the second video image sequence; adding or subtracting stitched video frames of the first sequence of video images; and adding or subtracting stitched video frames of the second sequence of video images. The spliced video frames represent video frames used for splicing target videos during switching.
For example, when the adjustment control 51 corresponding to the splice of video A and video B in fig. 2M is moved left by a progress-bar length corresponding to 2s, the video frames at the splice end position of video A (i.e. the first video image sequence here) in the target video need to be updated; specifically, the spliced video frames of video A are reduced, here the video frames in the last 2s of video A. The video frames at the splice start position of video B (i.e. the second video image sequence) in the target video are also updated; specifically, the corresponding video frames are acquired from the original video to which video B belongs and added to the second video image sequence.
The left movement of the control 51 is described here as an example; when the control 51 is moved to the right, the processing is similar and is not repeated.
Continuing with the above example of the adjustment control 51 between the two sub-playing progress bars of video A and video B in fig. 2M: moving the adjustment control 51 to the left brings it closer to the sub-playing progress bar of video A and farther from that of video B. A preset mapping relation exists between the moving distance and the number of video frames, so the target number of frames to be adjusted, for example 3 frames, can be determined based on the moving distance; then 3 video frames are removed at the tail of video A and 3 video frames are added at the head of video B, the added 3 frames being sourced from the complete original video data recorded by the video processing device corresponding to video B.
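The frame-level adjustment just described (removing frames from the tail of video A and adding the same number at the head of video B, drawn from B's original recording) can be sketched as follows; the segment representation and the frame rate are illustrative assumptions:

```python
def adjust_splice(seg_a, seg_b, frames, fps=30):
    """Moving the adjustment control toward segment A by `frames` frames
    trims that many frames from A's tail and prepends the same number to
    B's head, taken from B's full original recording (a negative `frames`
    moves the splice the other way). Segments are (device, start_s, end_s)
    ranges into each device's original recording; a sketch of the
    behaviour described above, not the embodiment's exact algorithm."""
    shift = frames / fps
    (dev_a, a0, a1), (dev_b, b0, b1) = seg_a, seg_b
    # The splice point moves earlier: A ends sooner, B starts sooner.
    return (dev_a, a0, a1 - shift), (dev_b, b0 - shift, b1)


# Moving the control left by 3 frames at 30 fps shifts the splice by 0.1 s:
a, b = adjust_splice(("A", 0, 10), ("B", 4, 12), 3)
assert abs(a[2] - 9.9) < 1e-9       # video A now ends at 9.9 s
assert abs(b[1] - 3.9) < 1e-9       # video B now starts at 3.9 s of its recording
```

Note that extending B's head only works because the complete original recording of each device was stored, as described earlier; the target video alone does not contain the extra frames.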
Optionally, after fine-tuning through the adjustment control, for example when the adjustment control 51 in fig. 2M stays at the position corresponding to the 10th second of video A after moving, the video segment of the 7th to 10th seconds of video A and the video segment at the head of video B (the 1st to 3rd seconds) may be spliced and played through the video playing window 54.
In the embodiment of the application, when a user moves the adjustment control between two spliced video segments in the target video, not only can the splice position of the two segments be adjusted; the two video segments with the adjusted splice position can also be played, so that the user can conveniently judge, by browsing the played video with the updated splice position, whether the updated splice position between the different video segments is suitable.
Optionally, in the embodiment of the present application, when adjusting the splicing position of the target video, the spliced video frame may be previewed, and the adjustment result may be previewed, so as to ensure that the effect desired by the user may be achieved after the video is fine-tuned.
Optionally, after step 103, as shown in fig. 3, the method according to the embodiment of the present application may further include:
Step 201, receiving a seventh input of the user to the target video.
The video editing of the present embodiment takes the target video as an example; in other embodiments, the processing object may be another video, for example a video recorded by the first video processing device, the second video processing device or another video processing device, or a video downloaded from the Internet.
Step 202, in response to the seventh input, displaying a first video editing window of the target video on the first video processing device.
Step 203, receiving an eighth input from the user to the first video editing window.
Step 204, in response to the eighth input, updating the target video according to editing information, the editing information being determined according to the eighth input.
Step 205, the editing information is sent to a second video processing device, so that the second video processing device synchronously updates the target video according to the editing information.
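Steps 201 to 205 amount to applying an edit locally and broadcasting the editing information to the connected devices, which apply the same edit so that all copies of the target video stay in sync. A minimal in-memory sketch, with illustrative names and no real network transport, might look like this:

```python
class EditSession:
    """Sketch of the edit-synchronization in steps 201-205. A device
    updates its own copy of the target video according to the editing
    information, then sends that information to its peers. Names are
    hypothetical; serialization and transport are out of scope."""

    def __init__(self, name):
        self.name, self.peers, self.edits = name, [], []

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def apply_edit(self, edit_info, _from_peer=False):
        self.edits.append(edit_info)          # step 204: update local target video
        if not _from_peer:                    # step 205: share with peers
            for p in self.peers:
                p.apply_edit(edit_info, _from_peer=True)


m, a = EditSession("M"), EditSession("A")
m.connect(a)
m.apply_edit({"op": "beauty", "range": (0, 10)})
assert a.edits == m.edits == [{"op": "beauty", "range": (0, 10)}]
```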
For example, after generating the target video, the mobile phone M may send the target video to the mobile phone A, the mobile phone B and the mobile phone C, so that the three mobile phones also obtain the target video.
The connection between the mobile phone M and the other video processing devices is similar to the above example, but may be triggered in the following manner: as shown in fig. 4A, on the mobile phone M, when the user opens the target video in the album and clicks the multi-machine collaborative editing control 82, multi-machine synchronous editing can be performed on the target video in the window 81.
The interface shown in fig. 4A is also displayed on the mobile phone a, the mobile phone B, and the mobile phone C, and all the mobile phones connected successfully with the mobile phone M display the target video, and the editing options are the same. Fig. 4A illustrates various editing options, which are not described in detail herein.
For example, clicking the "beauty" control is the eighth input. It should be noted that, in order to avoid editing disorder caused by multiple editing operations on the same video, in the embodiment of the present application different mobile phones perform different edits on the same video. After one editing option is clicked, the mobile phone M shares the editing information corresponding to that option with the mobile phone A, the mobile phone B and the mobile phone C.
Similarly, after the mobile phone a or the mobile phone B or the mobile phone C selects a certain editing option for editing the target video, the editing information corresponding to the editing option can be synchronously transmitted to the mobile phone M in real time, and the mobile phone M can synchronously share the received editing information with other mobile phones, so that the editing information among the four mobile phones is shared.
After the plurality of mobile phones are communicatively connected with the mobile phone M, the mobile phone A can be used to add subtitles to the video, the mobile phone B to adjust the filter of the video, the mobile phone C to edit the duration of the video, and so on.
Alternatively, after editing the target video, the mobile phone M may display the edited preview image in the window 81; in addition, since the other mobile phones also edit the target video, the window 81 can preview the video effect edited by the other mobile phones.
Optionally, in this embodiment, the mobile phone M stores the edited video when the user clicks the storage control in fig. 4A or fig. 4B; if the mobile phone A, the mobile phone B, or the mobile phone C clicks its storage control, the stored video is synchronized to the mobile phone M.
In the embodiment of the application, the target video can be edited by a plurality of video processing devices, each performing video editing operations with different functions. This meets the need for multi-person collaboration in the video editing process and improves editing efficiency.
Optionally, when the editing information of different video processing devices conflicts (for example, the mobile phone A clips the 1 s-5 s video frames of the target video while the mobile phone M performs a face beautifying operation on the 1 s-10 s video frames; the two pieces of editing information obviously conflict), the video processing devices can be prompted by prompt information, and after one of them finishes editing, the other devices are prompted to perform the corresponding editing.
In this embodiment, if the editing operation being performed on one mobile phone affects the editing operation of another, it may be marked and prompted. For example, while the mobile phone A is trimming the duration of the video, the time period marked for deletion in its progress bar (optionally, in other embodiments, an added time period as well) is highlighted in a special color in the progress bars of the other mobile phones, indicating that that video segment will be cut. After the save control in fig. 4A is clicked on any one mobile phone, the editing result is synchronized to the others.
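The conflict in the example above (a 1 s-5 s clip versus a 1 s-10 s beautify) reduces to a time-range overlap test, which can be sketched as follows; the dictionary representation of an edit is an assumption for illustration:

```python
def edits_conflict(edit_a, edit_b):
    """Two edits conflict when their time ranges overlap."""
    start_a, end_a = edit_a["range"]
    start_b, end_b = edit_b["range"]
    return start_a < end_b and start_b < end_a


clip = {"device": "A", "function": "clip", "range": (1, 5)}
beautify = {"device": "M", "function": "beauty", "range": (1, 10)}

# The 1 s-5 s clip overlaps the 1 s-10 s beautify, so a prompt is shown.
edits_conflict(clip, beautify)
```

When the check returns true, each device would display the prompt information described above rather than applying both edits blindly.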
Optionally, if one mobile phone is already using a certain editing function, the corresponding control on the other mobile phones is grayed out, prompting their users that the function is being handled by another device; if a grayed-out control is clicked, the user is prompted that another user is using that function. As shown in fig. 4B, assume that the editing function of the grayed "music" control is being performed by the mobile phone A and that of the "beauty" control by the mobile phone B; accordingly, in fig. 4B on the mobile phone M side, both controls are gray, and the user cannot edit the target video with these two functions on the mobile phone M. This avoids the problem of confused editing information caused by different video processing devices applying the same editing function to the target video.
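The graying-out behaviour amounts to a per-function lock registry, which might look like the following sketch; the class and method names are assumptions, not from the disclosure:

```python
class FunctionLocks:
    """Tracks which device currently owns each editing function."""

    def __init__(self):
        self.owners = {}  # editing function -> device currently using it

    def acquire(self, function, device_id):
        """Claim an editing function; fails if another device holds it."""
        if function in self.owners and self.owners[function] != device_id:
            return False  # control stays gray; prompt "in use by another user"
        self.owners[function] = device_id
        return True

    def is_grayed_out(self, function, device_id):
        """A control is gray when some other device owns the function."""
        owner = self.owners.get(function)
        return owner is not None and owner != device_id


locks = FunctionLocks()
locks.acquire("music", "A")   # phone A takes the "music" function
locks.acquire("beauty", "B")  # phone B takes the "beauty" function

locks.is_grayed_out("music", "M")  # on phone M both controls are gray
locks.acquire("music", "M")        # and phone M cannot claim them
```

In practice the lock state would be synchronized through the hub phone along with the other editing information.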
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiment of the present application, a video processing apparatus executing the video processing method is taken as an example.
Referring to fig. 5, a block diagram of a first video processing apparatus 300 of one embodiment of the present application is shown. The first video processing apparatus 300 includes:
A first receiving module 301, configured to receive a first input from a user to a first video processing apparatus;
A first display module 302, configured to display, in response to the first input, a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
The generating module 303 is configured to generate a target video according to the first video image sequence and the second video image sequence when the video preview interface displays the second video image sequence acquired by the second camera of the second video processing device;
wherein the target video comprises at least one frame of a first video image of the first video image sequence and at least one frame of a second video image of the second video image sequence.
In the embodiment of the application, the video image sequences acquired by the cameras of different video processing devices can be displayed on the video preview interface, and video editing is performed according to the first video image sequence and the second video image sequence acquired by those cameras to generate a target video comprising at least one frame of first video image and at least one frame of second video image, where the at least one frame of first video image comes from the first video image sequence generated by the first video processing device, and the at least one frame of second video image comes from the second video image sequence generated by the second video processing device. According to the video processing method provided by the embodiment of the application, when the video image sequences respectively collected by the cameras of different video processing devices are displayed on the video preview interface, video editing is performed on the different video image sequences generated by the different video processing devices to generate the target video; professional video editing software is not required, and the operation difficulty and complexity of video editing are reduced.
Optionally, the video preview interface includes a main window and a first sub-window, the main window is used for displaying the first video image sequence, and the first sub-window is used for displaying the second video image sequence;
The generating module 303 includes:
the first receiving sub-module is used for receiving a second input of a user to the first sub-window;
An exchange sub-module for exchanging display content in the main window and the first sub-window in response to the second input;
And the splicing sub-module is used for video splicing of at least one frame of first video image and at least one frame of second video image displayed in the main window to obtain the target video.
In this embodiment, when video is recorded, the display contents of the main window and the first sub-window can be exchanged by performing a second input on the first sub-window; that is, the main window switches to displaying the video image sequence collected by the second camera, and the first sub-window switches to displaying the video image sequence collected by the first camera. Since different video processing devices can shoot the same scene from different angles, this enables camera-position switching during video recording. The target video can be obtained based on the content displayed in the main window; specifically, the videos displayed in the main window are spliced in order of display to obtain the target video. Recording can thus be performed in the same scene based on at least two video processing devices, which reduces the operation difficulty and complexity of video editing and improves video processing efficiency.
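The "splice in order of display" rule can be sketched as follows: whatever the main window showed, in the order it showed it, becomes the target video, regardless of which camera each frame came from. The frame labels and data model are illustrative assumptions:

```python
def stitch_main_window(display_log):
    """display_log: ordered (source, frame) pairs shown in the main window.
    The target video is simply the frames in display order."""
    return [frame for _source, frame in display_log]


# The user swaps windows after two frames: the main window first shows
# camera 1's frames, then camera 2's frames after the second input.
log = [("cam1", "f1"), ("cam1", "f2"), ("cam2", "g1"), ("cam2", "g2")]
target_video = stitch_main_window(log)  # ['f1', 'f2', 'g1', 'g2']
```

This is the "what you see is what you get" property the embodiment relies on: no separate editing pass is needed after recording.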
Optionally, the first video processing apparatus 300 further includes:
a second display module for displaying at least one device identifier for indicating a video processing device communicatively coupled to the first video processing device;
a second receiving module for receiving a third input by a user of a target device identification of the at least one device identification;
and the third display module is used for responding to the third input, displaying a third video image sequence acquired by a third camera of a third video processing device in a second sub-window, wherein the third video processing device is a video processing device indicated by the target device identification.
In the embodiment of the application, at least one device identifier indicating a video processing device communicatively connected to the first video processing device is displayed; a third input by the user on a target device identifier among the device identifiers is received; and, in response to the third input, a third video image sequence acquired by the third camera of the video processing device indicated by the target device identifier is displayed in a sub-window. This realizes video recording in a multi-camera-position mode, in which video images from different camera positions are clipped to generate the target video. By communicatively connecting a plurality of video processing devices to the first video processing device, recorded videos can be displayed and clipped at the same time, and the video displayed in the main window is taken as the target video (what you see is what you get), which simplifies video editing.
Optionally, the first video processing apparatus 300 further includes:
the first determining module is used for determining the relative shooting azimuth of the third camera according to the image content of the third video image sequence and the first video image sequence, wherein the relative shooting azimuth is the shooting azimuth of the third camera relative to the first camera;
and the second determining module is used for determining the target display position of the second sub-window according to the relative shooting azimuth.
In the embodiment of the application, the shooting azimuth of the third camera relative to the first camera can be determined according to the image content of the third video image sequence and the first video image sequence; and determining the target display position of the second sub-window based on the shooting direction and the position of the main window, so that a user can identify the shooting angle of the camera corresponding to each sub-window through the relative position relation between the second sub-window and the main window in the video preview interface.
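One simple way to realize this placement, sketched below under the assumption that the relative shooting azimuth has already been estimated as an angle in degrees, is to quantize the azimuth to an edge of the preview interface; the angle thresholds are illustrative assumptions, not specified by the disclosure:

```python
def sub_window_position(relative_azimuth_deg):
    """Map a relative azimuth (0 deg = same direction as the first camera)
    to the edge of the preview interface where the sub-window is docked."""
    azimuth = relative_azimuth_deg % 360
    if azimuth < 45 or azimuth >= 315:
        return "top"      # roughly the same direction as the first camera
    if azimuth < 135:
        return "right"    # shooting from the first camera's right side
    if azimuth < 225:
        return "bottom"   # shooting from the opposite direction
    return "left"         # shooting from the first camera's left side


sub_window_position(90)   # a camera to the right docks on the right edge
sub_window_position(270)  # a camera to the left docks on the left edge
```

With such a mapping, the spatial layout of the sub-windows around the main window mirrors the spatial layout of the cameras around the scene.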
Optionally, the first video processing apparatus 300 further includes:
The third receiving module is used for receiving a fourth input of the user to the video preview interface;
The first control module is used for responding to the fourth input and controlling the first camera and the second camera to stop acquiring video images when the first video image sequence and the second video image sequence are video image sequences acquired in real time in the video recording process;
And the second control module is used for responding to the fourth input and stopping playing the first video and the second video when the first video image sequence is a video image in the recorded first video and the second video image sequence is a video image in the recorded second video.
In the embodiment of the application, through the fourth input of the video preview interface, when the video content displayed by the main window and the sub window is a video image sequence acquired in real time, the cameras corresponding to the windows are controlled to stop acquiring video images in response to the fourth input; under the condition that the video content displayed by the main window and the sub window is the video image in the recorded video, each window can be stopped from playing each recorded video, and the recording stopping or playing of the video images collected by the cameras is realized through one-key input of a video preview interface.
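The one-key behaviour of the fourth input is a dispatch on what each window is currently showing, which can be sketched as follows; the window representation is an assumption for illustration:

```python
def handle_fourth_input(windows):
    """windows: the main window and sub-windows, each a dict describing
    what it currently shows. Returns the control actions to perform."""
    actions = []
    for w in windows:
        if w["mode"] == "live":
            # Live feed: stop the corresponding camera from acquiring images.
            actions.append(("stop_capture", w["source"]))
        else:
            # Recorded video: stop playback in that window instead.
            actions.append(("stop_playback", w["source"]))
    return actions


# During recording, a single input stops both cameras at once.
live_windows = [{"mode": "live", "source": "camera1"},
                {"mode": "live", "source": "camera2"}]
handle_fourth_input(live_windows)
# -> [('stop_capture', 'camera1'), ('stop_capture', 'camera2')]
```

The same input applied to windows in "recorded" mode yields stop-playback actions, matching the two branches described above.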
Optionally, the first video processing apparatus 300 further includes:
the fourth receiving module is used for receiving a fifth input of a user to the target video;
A fourth display module, configured to display a video adjustment window in response to the fifth input, where the video adjustment window includes an adjustment control, at least one first video thumbnail, and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of second video image, and the adjustment control is used for updating the video frames of the target video;
a fifth receiving module, configured to receive a sixth input from a user to the adjustment control;
and the first updating module is used for responding to the sixth input, updating the display position of the adjusting control, and updating the video frame of the target video according to the updated display position of the adjusting control.
In the embodiment of the application, after the clipped target video is generated, the video frames at the splice position in the target video can be adjusted, and the user can precisely adjust the position of the adjustment control by browsing the thumbnails of the video frames at the splice position, thereby precisely adjusting the splice position in the target video.
Optionally, the first updating module is further configured to perform at least one of the following steps:
Updating the splice ending video frame of the first video image sequence;
Updating a splice starting video frame of the second video image sequence;
Adding or subtracting stitched video frames of the first sequence of video images;
and adding or subtracting stitched video frames of the second sequence of video images.
In this embodiment, the starting and ending video frames at the splice position can be added, removed, or updated according to the user's specific needs, so that the video frames at the splice position fit better.
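Treating the two sequences as time-aligned frame lists, moving the adjustment control can be sketched as shifting the splice index: frames added to one sequence's contribution are removed from the other's. The data model below is an assumption for illustration:

```python
def adjust_splice(first_seq, second_seq, splice_index, delta):
    """Return the target video with the splice point moved by `delta` frames.

    splice_index: how many frames of first_seq precede the splice.
    delta > 0 extends the first sequence's contribution (adding its frames
    and removing the second sequence's); delta < 0 does the opposite.
    """
    new_index = max(0, min(len(first_seq), splice_index + delta))
    return first_seq[:new_index] + second_seq[new_index:]


first = ["a1", "a2", "a3", "a4"]   # frames from the first video image sequence
second = ["b1", "b2", "b3", "b4"]  # time-aligned frames from the second

adjust_splice(first, second, 2, 0)   # ['a1', 'a2', 'b3', 'b4']
adjust_splice(first, second, 2, 1)   # ['a1', 'a2', 'a3', 'b4']
```

The thumbnails on either side of the adjustment control would correspond to the frames just before and after `new_index`, which is what lets the user place the splice precisely.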
Optionally, the first video processing apparatus 300 further includes:
a sixth receiving module, configured to receive a seventh input of the user to the target video;
A fifth display module for displaying a first video editing window of the target video at the first video processing device in response to the seventh input;
A seventh receiving module, configured to receive an eighth input from a user to the first video editing window;
A second updating module for updating the target video according to editing information in response to the eighth input, the editing information being determined according to the eighth input;
And the sending module is used for sending the editing information to a second video processing device so that the second video processing device synchronously updates the target video according to the editing information.
In the embodiment of the application, the target video can be edited by a plurality of video processing devices, each performing video editing operations with different functions. This meets the need for multi-person collaboration in the video editing process and improves editing efficiency.
The video processing apparatus in the embodiment of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), etc., and the non-mobile electronic device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, etc.; the embodiments of the present application are not particularly limited.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The video processing device provided by the embodiment of the present application can implement each process implemented by the above method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 6, the embodiment of the present application further provides an electronic device 2000, including a processor 2002, a memory 2001, and a program or instruction stored in the memory 2001 and executable by the processor 2002, where the program or instruction, when executed, implements each process of the video processing method embodiment and achieves the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 through a power management system so as to implement functions such as charge management, discharge management, and power consumption management. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which are not described in detail herein.
Wherein the user input unit 1007 is configured to receive a first input of a user to the first video processing apparatus;
a display unit 1006, configured to display, in response to the first input, a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
A processor 1010, configured to generate a target video according to the first video image sequence and the second video image sequence when the second video image sequence acquired by the second camera of the second video processing device is displayed on the video preview interface;
wherein the target video comprises at least one frame of a first video image of the first video image sequence and at least one frame of a second video image of the second video image sequence.
In the embodiment of the application, the video image sequences acquired by the cameras of different video processing devices can be displayed on the video preview interface, and video editing is performed according to the first video image sequence and the second video image sequence acquired by those cameras to generate a target video comprising at least one frame of first video image and at least one frame of second video image, where the at least one frame of first video image comes from the first video image sequence generated by the first video processing device, and the at least one frame of second video image comes from the second video image sequence generated by the second video processing device. According to the video processing method provided by the embodiment of the application, when the video image sequences respectively collected by the cameras of different video processing devices are displayed on the video preview interface, video editing is performed on the different video image sequences generated by the different video processing devices to generate the target video; professional video editing software is not required, and the operation difficulty and complexity of video editing are reduced.
Optionally, the video preview interface includes a main window and a first sub-window, the main window is used for displaying the first video image sequence, and the first sub-window is used for displaying the second video image sequence;
a user input unit 1007 for receiving a second input of a user to the first sub-window;
a processor 1010 for exchanging display content in the main window and the first sub-window in response to the second input; and video stitching is carried out on at least one frame of first video image and at least one frame of second video image displayed in the main window, so that the target video is obtained.
In this embodiment, when video is recorded, the display contents of the main window and the first sub-window can be exchanged by performing a second input on the first sub-window; that is, the main window switches to displaying the video image sequence collected by the second camera, and the first sub-window switches to displaying the video image sequence collected by the first camera. Since different video processing devices can shoot the same scene from different angles, this enables camera-position switching during video recording. The target video can be obtained based on the content displayed in the main window; specifically, the videos displayed in the main window are spliced in order of display to obtain the target video. Recording can thus be performed in the same scene based on at least two video processing devices, which reduces the operation difficulty and complexity of video editing and improves video processing efficiency.
Optionally, a display unit 1006 is configured to display at least one device identifier, where the device identifier is configured to indicate a video processing device communicatively connected to the first video processing device;
A user input unit 1007 for receiving a third input of a target device identification of the at least one device identification by a user;
And a display unit 1006, configured to display, in response to the third input, in a second sub-window, a third video image sequence acquired by a third camera of a third video processing device, where the third video processing device is the video processing device indicated by the target device identification.
In the embodiment of the application, at least one device identifier indicating a video processing device communicatively connected to the first video processing device is displayed; a third input by the user on a target device identifier among the device identifiers is received; and, in response to the third input, a third video image sequence acquired by the third camera of the video processing device indicated by the target device identifier is displayed in a sub-window. This realizes video recording in a multi-camera-position mode, in which video images from different camera positions are clipped to generate the target video. By communicatively connecting a plurality of video processing devices to the first video processing device, recorded videos can be displayed and clipped at the same time, and the video displayed in the main window is taken as the target video (what you see is what you get), which simplifies video editing.
Optionally, the processor 1010 is configured to determine, according to the image content of the third video image sequence and the first video image sequence, a relative shooting orientation of the third camera, where the relative shooting orientation is the shooting orientation of the third camera relative to the first camera; and to determine the target display position of the second sub-window according to the relative shooting orientation.
In the embodiment of the application, the shooting azimuth of the third camera relative to the first camera can be determined according to the image content of the third video image sequence and the first video image sequence; and determining the target display position of the second sub-window based on the shooting direction and the position of the main window, so that a user can identify the shooting angle of the camera corresponding to each sub-window through the relative position relation between the second sub-window and the main window in the video preview interface.
Optionally, a user input unit 1007 is configured to receive a fourth input of the user to the video preview interface;
a processor 1010, configured to, in response to the fourth input, control the first camera and the second camera to stop acquiring video images when the first video image sequence and the second video image sequence are video image sequences acquired in real time in a video recording process; and stopping playing the first video and the second video in response to the fourth input when the first video image sequence is a video image in the recorded first video and the second video image sequence is a video image in the recorded second video.
In the embodiment of the application, through the fourth input of the video preview interface, when the video content displayed by the main window and the sub window is a video image sequence acquired in real time, the cameras corresponding to the windows are controlled to stop acquiring video images in response to the fourth input; under the condition that the video content displayed by the main window and the sub window is the video image in the recorded video, each window can be stopped from playing each recorded video, and the recording stopping or playing of the video images collected by the cameras is realized through one-key input of a video preview interface.
Optionally, a user input unit 1007 is configured to receive a fifth input of the user to the target video;
A display unit 1006, configured to display a video adjustment window in response to the fifth input, where the video adjustment window includes an adjustment control, at least one first video thumbnail and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of second video image, and the adjustment control is used for updating the video frame of the target video;
a user input unit 1007 for receiving a sixth input of the user to the adjustment control;
And a processor 1010, configured to update a display position of the adjustment control in response to the sixth input, and update a video frame of the target video according to the updated display position of the adjustment control.
In the embodiment of the application, after the clipped target video is generated, the video frames at the splicing position in the target video can be adjusted, and a user can accurately adjust the position of the adjusting control by browsing the thumbnail of the video frames at the splicing position of the video, so that the aim of accurately adjusting the splicing position in the target video is fulfilled.
Optionally, a processor 1010 is configured to update a stitched end video frame of the first video image sequence; updating a splice starting video frame of the second video image sequence; adding or subtracting stitched video frames of the first sequence of video images; and adding or subtracting stitched video frames of the second sequence of video images.
In this embodiment, the starting and ending video frames at the splice position can be added, removed, or updated according to the user's specific needs, so that the video frames at the splice position fit better.
Optionally, a user input unit 1007 is configured to receive a seventh input of the target video from the user;
A display unit 1006 for displaying a first video editing window of the target video at the first video processing device in response to the seventh input;
a user input unit 1007 for receiving an eighth input of a user to the first video editing window;
A processor 1010 for updating the target video according to editing information in response to the eighth input, the editing information being determined according to the eighth input;
And the radio frequency unit 1001 is configured to send the editing information to a second video processing apparatus, so that the second video processing apparatus synchronously updates the target video according to the editing information.
In the embodiment of the application, the target video can be edited by a plurality of video processing devices, each performing video editing operations with different functions. This meets the need for multi-person collaboration in the video editing process and improves editing efficiency.
It should be appreciated that in embodiments of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 1009 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 1010 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the video processing method embodiment described above, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the video processing method embodiment, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or, of course, by hardware alone, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the application, or the part thereof that contributes to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the application.
The embodiments of the application have been described above with reference to the accompanying drawings, but the application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the application and the scope of the claims, all of which fall within the protection of the application.
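The core flow of the embodiments above — selecting frames from two camera sequences and stitching them into a single target video — can be sketched as a toy model. This is an illustrative sketch only, not the patented implementation: all names (`stitch_target_video`, `frames_a`, `frames_b`) are hypothetical, toy strings stand in for decoded video images, and a real device would operate on camera frame streams rather than in-memory lists.

```python
# Toy model of the claimed flow: generate a "target video" from two
# video image sequences. All names here are hypothetical.

def stitch_target_video(first_sequence, second_sequence,
                        first_start=0, second_start=0):
    """Concatenate the selected frames of two image sequences.

    first_start / second_start model the "stitching start video frame"
    adjustment described for the adjustment control: frames before the
    start index are dropped from each sequence before stitching.
    """
    first_part = first_sequence[first_start:]
    second_part = second_sequence[second_start:]
    if not first_part or not second_part:
        # The target video must contain at least one frame from each sequence.
        raise ValueError("target video needs at least one frame from each sequence")
    return first_part + second_part


# Toy frames: strings stand in for decoded video images.
frames_a = ["A0", "A1", "A2"]   # first camera's sequence
frames_b = ["B0", "B1"]         # second camera's sequence
target = stitch_target_video(frames_a, frames_b, first_start=1)
# target is ["A1", "A2", "B0", "B1"]
```

Moving `first_start` or `second_start` corresponds to dragging the adjustment control to add or reduce stitched frames of either sequence.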

Claims (10)

1. A video processing method, comprising:
receiving a first input from a user to a first video processing device;
in response to the first input, displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
in a case where a second video image sequence acquired by a second camera of a second video processing device is displayed on the video preview interface, generating a target video according to the first video image sequence and the second video image sequence;
wherein the target video comprises at least one frame of first video image in the first video image sequence and at least one frame of second video image in the second video image sequence;
the video processing method further comprises:
determining a relative shooting azimuth of a third camera according to image content of a third video image sequence and the first video image sequence, wherein the relative shooting azimuth is a shooting azimuth of the third camera relative to the first camera;
determining a target display position of the second sub-window according to the relative shooting azimuth;
and displaying the third video image sequence acquired by the third camera of a third video processing device in the second sub-window.
2. The video processing method of claim 1, wherein the video preview interface comprises a main window for displaying the first sequence of video images and a first sub-window for displaying the second sequence of video images;
the generating a target video according to the first video image sequence and the second video image sequence comprises:
receiving a second input of a user to the first sub-window;
exchanging display content of the main window and the first sub-window in response to the second input;
and performing video stitching on the at least one frame of first video image and the at least one frame of second video image displayed in the main window, to obtain the target video.
3. The video processing method according to claim 1, wherein the displaying the third video image sequence acquired by the third camera of a third video processing device in the second sub-window comprises:
displaying at least one device identification, wherein the at least one device identification indicates a video processing device communicatively connected to the first video processing device;
receiving a third input of a user to a target device identification in the at least one device identification;
and in response to the third input, displaying, in the second sub-window, the third video image sequence acquired by the third camera of the third video processing device, wherein the third video processing device is the video processing device indicated by the target device identification.
4. The video processing method according to claim 1, characterized in that the video processing method further comprises:
receiving a fourth input of a user to the video preview interface;
in a case where the first video image sequence and the second video image sequence are video image sequences acquired in real time in a video recording process, controlling, in response to the fourth input, the first camera and the second camera to stop acquiring video images;
and in a case where the first video image sequence is video images in a recorded first video and the second video image sequence is video images in a recorded second video, stopping playing the first video and the second video in response to the fourth input.
5. The video processing method according to claim 1, characterized in that the video processing method further comprises:
receiving a fifth input of a user to the target video;
in response to the fifth input, displaying a video adjustment window, wherein the video adjustment window includes an adjustment control, at least one first video thumbnail, and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of second video image, and the adjustment control is used for updating video frames of the target video;
receiving a sixth input of a user to the adjustment control;
and in response to the sixth input, updating a display position of the adjustment control, and updating the video frames of the target video according to the updated display position of the adjustment control.
6. The video processing method of claim 5, wherein the updating the video frames of the target video comprises at least one of:
updating a stitching start video frame of the first video image sequence;
updating a stitching start video frame of the second video image sequence;
adding or reducing stitched video frames of the first video image sequence;
and adding or reducing stitched video frames of the second video image sequence.
7. The video processing method according to claim 1, wherein after generating a target video from the first video image sequence and the second video image sequence, the video processing method further comprises:
receiving a seventh input of a user to the target video;
in response to the seventh input, displaying a first video editing window of the target video on the first video processing device;
receiving an eighth input of a user to the first video editing window;
in response to the eighth input, updating the target video according to editing information, wherein the editing information is determined according to the eighth input;
and sending the editing information to the second video processing device, so that the second video processing device synchronously updates the target video according to the editing information.
8. A first video processing apparatus, comprising:
a first receiving module for receiving a first input from a user to a first video processing device;
a first display module for displaying, in response to the first input, a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
a generation module for generating a target video according to the first video image sequence and a second video image sequence in a case where the second video image sequence acquired by a second camera of a second video processing device is displayed on the video preview interface;
wherein the target video comprises at least one frame of first video image in the first video image sequence and at least one frame of second video image in the second video image sequence;
a first determining module for determining a relative shooting azimuth of a third camera according to image content of a third video image sequence and the first video image sequence, wherein the relative shooting azimuth is a shooting azimuth of the third camera relative to the first camera;
a second determining module for determining a target display position of a second sub-window according to the relative shooting azimuth;
and a third display module for displaying the third video image sequence acquired by the third camera of a third video processing device in the second sub-window.
9. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 7.
10. A readable storage medium, wherein a program or instructions are stored on the readable storage medium, and the program or instructions, when executed by a processor, implement the steps of the video processing method according to any one of claims 1 to 7.
CN202111091200.9A 2021-09-16 2021-09-16 Video processing method, device, electronic equipment and readable storage medium Active CN113794923B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111091200.9A CN113794923B (en) 2021-09-16 Video processing method, device, electronic equipment and readable storage medium
PCT/CN2022/118527 WO2023040844A1 (en) 2021-09-16 2022-09-13 Video processing method and apparatus, electronic device, and readable storage medium

Publications (2)

Publication Number Publication Date
CN113794923A CN113794923A (en) 2021-12-14
CN113794923B true CN113794923B (en) 2024-06-28

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194227A (en) * 2021-04-14 2021-07-30 上海传英信息技术有限公司 Processing method, mobile terminal and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant