CN117479025A - Video processing method, video processing device, electronic equipment and medium - Google Patents


Info

Publication number
CN117479025A
Authority
CN
China
Prior art keywords
video frame
image
camera module
video
module
Prior art date
Legal status
Pending
Application number
CN202311550337.5A
Other languages
Chinese (zh)
Inventor
邓智桂
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202311550337.5A
Publication of CN117479025A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 23/45 Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations


Abstract

The application discloses a video processing method, a video processing device, electronic equipment and a medium, and belongs to the technical field of video data processing. The video processing method is applied to an electronic device that includes a first camera module and a second camera module, and the method includes: during the process of switching from the first camera module to the second camera module for video shooting, acquiring a first video frame captured by the first camera module and a second video frame captured by the second camera module, where the first video frame and the second video frame are captured simultaneously; cropping a first region image from the second video frame; and performing image fusion processing on the first video frame and the first region image to obtain a third video frame.

Description

Video processing method, video processing device, electronic equipment and medium
Technical Field
The application belongs to the technical field of video data processing, and particularly relates to a video processing method, a video processing device, electronic equipment and a medium.
Background
To meet people's growing shooting demands, mobile terminals equipped with multiple camera modules are becoming popular. The lenses of these camera modules can have different focal lengths, supporting zoom shooting across different focal ranges.
However, when shooting video with a mobile terminal equipped with multiple camera modules, it is inevitably necessary to switch between different camera modules as the shot scene changes. The configurations of different camera modules typically differ: for example, they use different image sensors and are mounted at different positions. During a module switch, these differences can cause picture problems such as jitter, stuttering, and abrupt color and brightness differences between the video frames shot before and after the switch, resulting in a poor video shooting effect.
Disclosure of Invention
An object of the embodiments of the application is to provide a video processing method, a video processing device and electronic equipment, which can solve the picture problems that currently arise while shooting during a camera-module switch.
In a first aspect, an embodiment of the present application provides a video processing method applied to an electronic device, where the electronic device includes a first camera module and a second camera module, and the method includes:
during the process of switching from the first camera module to the second camera module for video shooting, acquiring a first video frame captured by the first camera module and a second video frame captured by the second camera module, where the first video frame and the second video frame are captured simultaneously;
cropping a first region image from a third video frame, where the third video frame is whichever of the first video frame and the second video frame was captured with the relatively larger field angle;
and performing image fusion processing on a fourth video frame and the first region image to obtain a fifth video frame, where the fourth video frame is whichever of the first video frame and the second video frame was captured with the relatively smaller field angle.
In a second aspect, an embodiment of the present application provides a video processing apparatus applied to an electronic device, where the electronic device includes a first camera module and a second camera module, and the apparatus includes:
an acquisition module, configured to acquire, during the process of switching from the first camera module to the second camera module for video shooting, a first video frame captured by the first camera module and a second video frame captured by the second camera module, where the first video frame and the second video frame are captured simultaneously;
a cropping module, configured to crop a first region image from a third video frame, where the third video frame is whichever of the first video frame and the second video frame was captured with the relatively larger field angle;
and a fusion module, configured to perform image fusion processing on a fourth video frame and the first region image to obtain a fifth video frame, where the fourth video frame is whichever of the first video frame and the second video frame was captured with the relatively smaller field angle.
In a third aspect, an embodiment of the present application provides an electronic device, including: a first camera module, a second camera module, a processor, and a memory, where the memory stores a program or instructions that can run on the processor, and the program or instructions, when executed by the processor, implement the steps of the video processing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, during the process of switching from the first camera module to the second camera module for video shooting, a first video frame and a second video frame captured simultaneously by the first and second camera modules are acquired; a first region image is cropped from the third video frame (whichever of the two was captured with the relatively larger field angle), and image fusion processing is performed on the fourth video frame (the one captured with the relatively smaller field angle) and the first region image to obtain a fifth video frame. In this technical scheme, the first video frame is captured by the first camera module used before the switch, and the second video frame is captured by the second camera module used after the switch. By fusing the fourth video frame with the first region image cropped from the third video frame, the image characteristics of the first and second video frames can be combined to a certain extent, eliminating the image differences that may exist between two images shot by two different camera modules, so that the fifth video frame obtained after image fusion is closer to the first video frame than the second video frame is. As a result, the video frames shot during the module switch are far less likely to suffer picture problems such as jitter, stuttering, and abrupt color and brightness differences between the frames shot before and after the switch, which improves the smoothness of the shot picture during the camera-module switch and improves the video shooting effect.
Drawings
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
FIG. 2 is a schematic illustration of a first region image capture provided in an embodiment of the present application;
FIG. 3 is a schematic illustration of another first region image capture provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the principle of capturing a first region image according to an embodiment of the present application;
fig. 5 is a schematic diagram of an image fusion method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a video frame provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of another video frame provided by an embodiment of the present application;
FIG. 8 is a flowchart of another video processing method provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a consistency process provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of video frame processing provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of another video frame processing provided by an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 13 is a block diagram of a video processing apparatus provided in an embodiment of the present application;
fig. 14 is a schematic structural diagram of still another electronic device according to an embodiment of the present application;
Fig. 15 is a schematic structural diagram of still another electronic device according to an embodiment of the present application;
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type and not limited to the number of objects, e.g., the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The video processing method, the video processing apparatus, and the electronic device provided by the embodiments of the application are described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a video processing method according to an embodiment of the present application is shown. The video processing method can be applied to an electronic device that includes a first camera module and a second camera module. Optionally, the electronic device may be a mobile terminal, such as a mobile phone, a tablet, a computer, or a wearable device. As shown in fig. 1, the video processing method includes:
step 101, in the process of switching from the first camera module to the second camera module for video shooting, a first video frame collected by the first camera module and a second video frame collected by the second camera module are obtained. The first video frame and the second video frame are acquired simultaneously.
In the embodiments of the application, during the process of switching from the first camera module to the second camera module for video shooting, both camera modules are in working state and capture video frames simultaneously. The electronic device obtains the video frames captured by the first and second camera modules at the same moment for subsequent processing.
In an alternative implementation, the electronic device may include a main control chip and an image processing chip, the main control chip being connected to the image processing chip. The original image (pure raw) data collected by the first camera module and the second camera module can be transmitted to the image processing chip through the main control chip, so that the image processing chip executes the video processing method provided by the embodiment of the application. The process of the electronic device obtaining the first video frame collected by the first camera module and the second video frame collected by the second camera module may include:
the main control chip performs image preprocessing on the original image data acquired by the first camera module to obtain a first video frame; the main control chip performs image preprocessing on the original image data acquired by the second camera module to obtain a second video frame; and the main control chip transmits the first video frame and the second video frame to the image processing chip.
The main control chip can receive the original image data acquired by the first camera module and the second camera module simultaneously in the process of video shooting by switching the first camera module to the second camera module. Performing image preprocessing on the original image data of the first camera module to obtain a first video frame; and performing image preprocessing on the original image data of the second camera module to obtain a second video frame. Optionally, the image preprocessing may include at least one of the following processing modes: dead pixel removal processing, lens correction processing, gain processing, and the like.
In this way, the main control chip is used to preprocess the original image data captured by the first and second camera modules to obtain the first and second video frames. This improves the image-data quality of the two frames and facilitates subsequent processing. Having the main control chip perform the image preprocessing also offloads part of the work from the image processing chip and improves the image processing chip's efficiency. Correspondingly, because the video processing method applied to the first and second video frames is executed by the image processing chip rather than the main control chip, the main control chip's workload, and hence its power consumption, is reduced.
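The preprocessing steps listed above (dead-pixel removal, lens correction, gain) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the function name, the median-based hot-pixel test, the threshold, the per-pixel lens-gain map, and the 10-bit clipping range are all assumptions.

```python
import numpy as np

def preprocess_raw(raw: np.ndarray,
                   lens_gain_map: np.ndarray,
                   digital_gain: float = 1.0,
                   hot_pixel_thresh: float = 4.0) -> np.ndarray:
    """Illustrative raw-frame preprocessing: dead-pixel removal,
    lens correction, and gain, as listed in the text."""
    img = raw.astype(np.float32)

    # Dead/hot-pixel removal: replace pixels that deviate strongly
    # from the median of their 3x3 neighbourhood.
    padded = np.pad(img, 1, mode="edge")
    neigh = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)], axis=0)
    med = np.median(neigh, axis=0)
    mad = np.median(np.abs(neigh - med), axis=0) + 1e-6
    bad = np.abs(img - med) > hot_pixel_thresh * mad
    img[bad] = med[bad]

    # Lens correction: divide out a per-pixel shading profile.
    img = img / lens_gain_map

    # Gain: global digital gain, then clip back to the raw range
    # (10-bit raw assumed here).
    return np.clip(img * digital_gain, 0.0, 1023.0)
```

A real ISP would typically run such steps per colour channel on the Bayer mosaic; the single-channel version above only shows the shape of the pipeline.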
In another alternative implementation, the electronic device may include an image processing chip. The original image data collected by the first camera module and the second camera module can be directly transmitted to the image processing chip, so that the image processing chip executes the video processing method provided by the embodiment of the application.
Correspondingly, the process of the electronic device obtaining the first video frame collected by the first camera module and the second video frame collected by the second camera module may include: the image processing chip performs image preprocessing on the original image data acquired by the first camera module to obtain a first video frame; and the image processing chip performs image preprocessing on the original image data acquired by the second camera module to obtain a second video frame.
Step 102, intercepting a first area image in a third video frame. The third video frame is a video frame acquired by adopting a relatively large field angle in the first video frame and the second video frame.
In the embodiments of the application, there are two switching situations. In the first, the first field of view (FOV) of the first camera module is greater than the second FOV of the second camera module; switching from the first camera module to the second for video shooting thus switches from the large field angle to the small one. In the second, the first field angle of the first camera module is smaller than the second field angle of the second camera module; the switch thus goes from the small field angle to the large one. In either case, the electronic device may crop the first region image from the third video frame, the one captured with the relatively larger field angle, for subsequent processing. It will be appreciated that if the first field angle is greater than the second, the third video frame is the first video frame; if the first field angle is smaller than the second, the third video frame is the second video frame.
Wherein the first region image is equal in size to the fourth video frame. The fourth video frame is a video frame acquired by adopting a relatively smaller field angle in the first video frame and the second video frame. It will be appreciated that if the first angle of view is greater than the second angle of view, then the fourth video frame is the second video frame. If the first angle of view is smaller than the second angle of view, the fourth video frame is the first video frame.
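The third/fourth-frame convention above can be expressed as a small selection helper (illustrative only; the function and parameter names are assumptions, not from the patent):

```python
def select_wide_and_narrow(frame1, fov1_deg: float,
                           frame2, fov2_deg: float):
    """Return (third, fourth) video frames per the rule in the text:
    third = the frame captured with the larger field angle,
    fourth = the frame captured with the smaller field angle."""
    if fov1_deg > fov2_deg:
        return frame1, frame2  # third = first frame, fourth = second
    return frame2, frame1      # third = second frame, fourth = first
```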
When the overlapping range of the second angle of view and the first angle of view is the second angle of view or the first angle of view, that is, when the first angle of view covers all of the second angle of view or the second angle of view covers all of the first angle of view, the center of the first area image overlaps with the center of the fourth video frame.
For example, assume the second field angle is greater than the first, so the third video frame is the second video frame and the fourth video frame is the first video frame. As shown in fig. 2, when the second field angle covers the whole first field angle, the second video frame captured by the second camera module is 201 and the first video frame captured by the first camera module is 202; a first region image, centered on the same point as the first video frame 202 and equal to it in size, is cropped from the second video frame 201. As shown in fig. 3, when the second field angle covers only part of the first field angle, the second video frame captured by the second camera module is 201 and the first video frame captured by the first camera module is 202; a first region image 2011, equal in size to the first video frame 202, is cropped from the second video frame 201.
As a further example, as shown in fig. 4, assume the second field angle of the second camera module is larger than the first field angle of the first camera module, so the third video frame is the second video frame and the fourth video frame is the first video frame. When the focusing distance at the time of capturing the first and second video frames is distance_0, the second field angle covers the whole first field angle and their edges do not overlap; the center of the first region image is aligned with the center of the first video frame, and the first region image is equal in size to the first video frame. When the focusing distance is distance_1, the second field angle covers the whole first field angle and their edges partly coincide; the center of the first region image is aligned with the center of the first video frame, and the two are equal in size. When the focusing distance is distance_2, the second field angle covers only part of the first field angle; the center of the first region image is no longer aligned with the center of the first video frame, although the two remain equal in size. In fig. 4, the first region image is represented by an unfilled rectangle, the first video frame by a rectangle filled with oblique lines, and their centers by circles.
In some embodiments of the present application, the electronic device may align the first and second field angles according to the focusing distance using a calibration method. Taking the case shown in fig. 4 as an example, the specific sizes of the first and second field angles differ at different focusing distances of the two camera modules. However, once the mounting positions and angles of the first and second camera modules on the electronic device are fixed, the field-angle difference between the two is essentially fixed at each focusing distance. Therefore, when the second field angle covers the complete first field angle, the electronic device can center-align the second video frame with the first video frame according to the real-time focusing distance, so that the two frames coincide and a smooth switch is achieved.
Alternatively, the electronic device may determine a center position of the first region image before capturing the first region image in the third video frame, and acquire the first lateral length and the first longitudinal length of the fourth video frame. And then, according to the central position, the first transverse length and the first longitudinal length, the first area image is intercepted in the third video frame. The lateral length of the first region image is a first lateral length, and the longitudinal length of the first region image is a first longitudinal length.
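The cropping step just described, taking a region with the fourth video frame's lateral and longitudinal lengths around a computed center, might be sketched as follows. This is illustrative; the clamping behaviour at the frame border is an assumption, since the patent does not specify what happens when the region would fall outside the wide frame.

```python
import numpy as np

def crop_first_region(wide_frame: np.ndarray,
                      center_xy, width: int, height: int) -> np.ndarray:
    """Crop a width x height region around center_xy from the wide-FOV
    (third) frame; the result matches the narrow (fourth) frame's size."""
    cx, cy = center_xy
    x0 = int(round(cx - width / 2))
    y0 = int(round(cy - height / 2))
    # Clamp so the crop stays fully inside the wide frame (assumption).
    x0 = max(0, min(x0, wide_frame.shape[1] - width))
    y0 = max(0, min(y0, wide_frame.shape[0] - height))
    return wide_frame[y0:y0 + height, x0:x0 + width]
```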
Based on this, the electronic device may also perform steps 01 and 02 before intercepting the first region image in the third video frame in step 102.
In step 01, a focus distance when the first video frame and the second video frame are acquired, and a first lateral length and a first longitudinal length of the fourth video frame are acquired.
The focusing distance refers to the sum of the distance from the lens in the first/second camera module to the shooting object and the distance from the lens to the image sensor. Because the first and second camera modules shoot video at the same time, their focusing distances are approximately equal; the focusing distance acquired by the electronic device may therefore be that of either the first camera module or the second camera module.
In step 02, a center position of the first area image is calculated according to the focusing distance, the first field angle of the first camera module, the second field angle of the second camera module, and the relative positions of the first camera module and the second camera module.
In some embodiments of the present application, the electronic device may calculate a lateral distance and a longitudinal distance between a center of the first area image and an edge of the field angle corresponding to the third video frame according to the focusing distance, the first field angle of the first camera module, the second field angle of the second camera module, and a relative position of the first camera module and the second camera module, so as to obtain a center position of the first area image.
For example, taking the case where the second field angle is larger than the first, as shown in fig. 2, the electronic device calculates the lateral distance dx and the longitudinal distance dy from the center of the first region image to the field-angle edge corresponding to the second video frame, obtaining the center position (dx, dy) of the first region image.
Optionally, the electronic device may calculate the center position of the first area image according to the focusing distance, the center position of the fourth video frame, the first field angle of the first camera module, the second field angle of the second camera module, and the relative positions of the first camera module and the second camera module by using the first formula and the second formula.
Wherein the first formula satisfies:
the second formula satisfies:
where x₂ denotes the abscissa of the center position of the first region image, i.e. dx; y₂ denotes the ordinate of the center position of the first region image, i.e. dy; Δdx denotes the relative position of the first and second camera modules in the lateral direction; Δdy denotes their relative position in the longitudinal direction; L denotes the focusing distance; θ₁ denotes the field angle corresponding to the fourth video frame; θ₂ denotes the field angle corresponding to the third video frame; x₁ denotes the abscissa of the center position of the fourth video frame; y₁ denotes the ordinate of the center position of the fourth video frame; PixelSize₁ denotes the pixel size of the fourth video frame, that is, of an image shot at the field angle corresponding to the fourth video frame; and PixelSize₂ denotes the pixel size of the third video frame, that is, of an image shot at the field angle corresponding to the third video frame.
Taking the case where the second field angle is greater than the first as an example: θ₁ denotes the first field angle and θ₂ the second field angle; x₁ and y₁ denote the abscissa and ordinate of the center position of the first video frame; PixelSize₁ denotes the pixel size of an image shot at the first field angle, and PixelSize₂ that of an image shot at the second field angle. As shown in fig. 4, when the focusing distance at the time of capturing the first and second video frames is distance_0, the electronic device calculates the lateral distance dx_0 and the longitudinal distance dy_0 between the center of the first region image and the edge of the first field angle, obtaining the center position (dx_0, dy_0) of the first region image. When the focusing distance is distance_1, it calculates the lateral distance dx_1 and the longitudinal distance dy_1 between the center of the first region image and the edge of the first field angle, obtaining the center position (dx_1, dy_1). When the focusing distance is distance_2, it calculates the lateral distance dx_2 and the longitudinal distance dy_2 between the center of the first region image and the edge of the first field angle, obtaining the center position (dx_2, dy_2). Note that fig. 4 shows only the lateral distance between the center of the first region image and the edge of the first field angle.
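The first and second formulas themselves appear as figures in the original and did not survive extraction here, so the following is only a hypothetical geometric reading consistent with the listed variables (L, θ₁, θ₂, Δdx/Δdy, the frame centers, and the pixel sizes): convert the fourth frame's center to object-space units at the focusing distance, add the physical offset between the modules, and convert back to third-frame pixel coordinates. Every detail of this mapping, including reading "pixel size" as the frame's pixel count along the axis, is an assumption.

```python
import math

def region_center_coord(c_narrow_px: float,
                        n_px_narrow: int, n_px_wide: int,
                        baseline: float, L: float,
                        theta_narrow: float, theta_wide: float) -> float:
    """One axis of the first-region center (x2 or y2); a hypothetical
    reconstruction of the patent's first/second formulas.
    Angles in radians, baseline and L in the same physical unit."""
    # Object-space length per pixel at focusing distance L, per module.
    mpp_narrow = 2 * L * math.tan(theta_narrow / 2) / n_px_narrow
    mpp_wide = 2 * L * math.tan(theta_wide / 2) / n_px_wide
    # Narrow-frame center relative to its optical axis, in object
    # space, shifted by the physical offset between the two modules.
    obj = (c_narrow_px - n_px_narrow / 2) * mpp_narrow + baseline
    # Back to wide-frame pixel coordinates.
    return obj / mpp_wide + n_px_wide / 2
```

With identical optics and zero baseline this reduces to the identity mapping, which is the sanity check one would expect of any such formula.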
And 103, performing image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame. The fourth video frame is a video frame acquired by adopting a relatively smaller field angle in the first video frame and the second video frame.
Optionally, the electronic device may directly perform image fusion processing on the fourth video frame and the first region image to obtain the fifth video frame. The image fusion processing may be alpha blending, pyramid fusion, Poisson fusion, or the like.
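Of the fusion options mentioned, alpha blending is the simplest. A minimal sketch follows (illustrative; the fixed-alpha interface is an assumption, since in practice the weight would typically ramp over the duration of the switch):

```python
import numpy as np

def alpha_fuse(narrow_frame: np.ndarray,
               region_image: np.ndarray,
               alpha: float) -> np.ndarray:
    """Alpha-blend the fourth video frame with the (equal-sized)
    first region image: alpha = 0 keeps the narrow frame, alpha = 1
    keeps the region image cropped from the wide frame."""
    a = float(np.clip(alpha, 0.0, 1.0))
    fused = (1.0 - a) * narrow_frame.astype(np.float32) \
            + a * region_image.astype(np.float32)
    return fused.astype(narrow_frame.dtype)
```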
However, when the field angle corresponding to the third video frame covers only part of, or none of, the field angle corresponding to the fourth video frame, an image offset exists between the third and fourth video frames, and hence between the first region image and the fourth video frame. As shown in fig. 3 and fig. 4, when the second field angle covers only part of the first field angle, the first and second video frames are offset from each other, and consequently so are the first region image and the first video frame. The smaller the overlapping range of the two field angles, the larger the offset between the first and second video frames, and the larger the offset between the first region image and the first video frame. In the extreme case where the two field angles do not overlap at all, the first and second video frames may share no image content, and neither do the first region image and the first video frame. In fig. 4, for instance, as the focusing distance moves from distance_1 to distance_2, a relative offset appears between the first and second video frames, and hence between the first region image and the first video frame.
The degree of offset between the first region image and the fourth video frame directly affects their degree of overlap, and thus directly affects their fusion effect. Therefore, in some embodiments of the present application, in order to reduce the influence of the offset between the first video frame and the second video frame on the image fusion effect, the electronic device may fade the first area image or the fourth video frame, that is, reduce its transparency. This reduces the visual non-overlapping degree of the first area image and the fourth video frame, improves their image fusion effect, and ensures the smoothness of the photographed picture while the camera modules are switched.
Optionally, the electronic device performs image fusion processing on the fourth video frame and the first area image, and the process of obtaining the fifth video frame may include the following steps 1031 to 1032.
In step 1031, when the overlapping range of the second angle of view of the second image capturing module and the first angle of view of the first image capturing module is the second angle of view or the first angle of view, performing image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame.
The overlapping range of the second angle of view and the first angle of view being the second angle of view or the first angle of view means that the first angle of view fully covers the second angle of view, or the second angle of view fully covers the first angle of view. In this case, the first area image and the fourth video frame overlap completely and there is no offset problem, so the electronic device can directly perform image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame with a good image fusion effect.
For example, at the focusing distances distance_0 and distance_1 in fig. 4 (as in the case of fig. 2), the first region image and the fourth video frame overlap completely and there is no offset problem, so the electronic device directly performs image fusion processing on the fourth video frame and the first area image.
In step 1032, when the overlapping range of the second angle of view and the first angle of view is not the second angle of view or the first angle of view, the transparency of the target image is reduced, and the image fusion processing is performed on the fourth video frame and the first area image, so as to obtain a fifth video frame.
The target image is all or part of the image of the first video frame acquired by the first camera module. In the case where the third video frame is the first video frame, the target image is a partial image in the first video frame, that is, a first region image. In the case where the third video frame is the second video frame, the target image is the fourth video frame, i.e., the first video frame.
The overlapping range of the second angle of view and the first angle of view is not the second angle of view or the first angle of view, that is, the overlapping range of the second angle of view and the first angle of view is a partial second angle of view, or a partial first angle of view, or the second angle of view and the first angle of view do not have an overlapping range.
In the case where the overlapping range of the second angle of view and the first angle of view is not the second angle of view or the first angle of view, the first region image and the fourth video frame may not overlap completely, and there is a problem of offset. The electronic device may reduce transparency of the target image, and perform image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame. Alternatively, the electronic device may decrease the transparency of the target image by the target value. The target value may be determined based on the actual image fusion effect. Alternatively, the target value may be custom set by the user.
Illustratively, as shown in fig. 5, the third video frame is taken as the second video frame, and the fourth video frame is taken as the first video frame as an example. The first region image 501 has an offset in the first direction r compared to the first video frame 502.
The electronic device intercepts the first region image 501 in the second video frame, reducing the transparency of the first video frame 502. And performing image fusion processing on the first video frame 502 with the reduced transparency and the first region image 501 to obtain a fifth video frame 503. Note that, in the fifth video frame 503 of fig. 5, the transparency of the first video frame 502 is reduced as indicated by a dotted line.
In this way, when the overlapping range of the second angle of view and the first angle of view is not the second angle of view or the first angle of view, that is, when there is an image offset between the first area image and the fourth video frame, reducing the transparency of all or part of the image of the first video frame collected by the first camera module allows the fifth video frame obtained after image fusion to retain more of the image characteristics of the second video frame collected by the switching-destination module (that is, the second camera module), while reducing the visual non-overlapping degree of the first area image and the fourth video frame. Therefore, on the basis of improving the image fusion effect of the first region image and the fourth video frame, the shooting effect of the camera module switching is improved.
Alternatively, the degree of decrease in transparency of the target image may be inversely proportional to the size of the overlapping range of the second angle of view and the first angle of view. That is, the smaller the overlapping range of the second angle of view and the first angle of view, the lower the transparency of the target image. Based on this, the process of the electronic device reducing the transparency of the target image may include: and determining an adjustment variable value according to the proportion of the second view angle covering the first view angle. And reducing the transparency of the target image according to the adjustment variable value.
The proportion of the second angle of view covering the first angle of view is inversely proportional to the adjustment variable value. The adjustment variable value is the amount by which the transparency of the target image is reduced. For example, to preserve this inverse relation, the adjustment variable value may be the sum of a reference variable value and the product of an adjustment coefficient and the complement of the proportion, with the value of the adjustment coefficient larger than 1.
In this way, the smaller the overlap of the first angle of view and the second angle of view, the larger the offset between the first video frame and the second video frame, and hence the larger the offset between the first area image and the fourth video frame. By determining an adjustment variable value that is inversely proportional to the proportion of the second angle of view covering the first angle of view, and then reducing the transparency of the target image according to that value, a lower coverage proportion leads to a greater reduction in transparency. This further reduces the visual non-overlapping degree of the first region image and the fourth video frame and better improves their image fusion effect.
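The mapping from overlap proportion to transparency reduction can be sketched as below. This is a minimal sketch under stated assumptions: the concrete formula (reference value plus coefficient times the complement of the overlap ratio) and the fade-by-intensity-scaling implementation are illustrative choices consistent with the inverse relation described above, not the patent's exact method:

```python
import numpy as np

def adjustment_value(overlap_ratio, coeff=2.0, base=0.1):
    """Map the FOV overlap ratio (0..1) to a transparency-adjustment value.

    Smaller overlap -> larger adjustment, matching the inverse relation
    described in the text. coeff and base are assumed tuning parameters.
    """
    overlap_ratio = min(max(overlap_ratio, 0.0), 1.0)
    return base + coeff * (1.0 - overlap_ratio)

def fade(img, adjustment):
    """Fade the target image by scaling pixel intensities down by the adjustment value."""
    factor = max(0.0, 1.0 - adjustment)
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)
```

With full overlap the adjustment stays near the reference value, so the target image is barely faded; with no overlap the adjustment is largest and the target image is faded the most.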
In the embodiment of the application, in the process of video shooting by switching from the first camera shooting module to the second camera shooting module, the first area image in the third video frame is intercepted, and the image fusion processing is performed on the fourth video frame and the first area image, so that in the process of video shooting by switching from the first camera shooting module to the second camera shooting module, the electronic equipment can obtain and output the fifth video frame after the fusion processing.
Thus, as shown in fig. 6, when video shooting starts with the first camera module, is triggered to switch to the second camera module, and is completed by the second camera module, the captured video mainly comprises three segments of video frames: a first video frame 601 acquired by the first camera module in the first stage of video shooting, a fifth video frame 602 obtained by image fusion processing of the images acquired in the second stage, during which shooting switches from the first camera module to the second camera module, and a second video frame 603 acquired by the second camera module in the third stage.
In the related art, as shown in fig. 7, when video shooting starts with the first camera module, is triggered to switch to the second camera module, and is completed by the second camera module, the captured video mainly comprises two segments of video frames: a first video frame 601 acquired by the first camera module in the first stage and the second stage, and a second video frame 603 acquired by the second camera module in the third stage. However, different camera modules are typically configured differently; for example, they use different image sensors and are mounted at different positions. These differences may cause picture problems during the module switch, such as jitter, stuttering, and excessive color and brightness differences between the video frames shot before and after switching, resulting in a poor video shooting effect.
In the embodiment of the application, in the process of switching from the first camera module to the second camera module for video shooting, the first video frame and the second video frame simultaneously acquired by the first camera module and the second camera module are obtained; a first area image is intercepted from the video frame captured with the relatively larger field angle, and image fusion processing is performed on it and the video frame captured with the relatively smaller field angle. In this technical scheme, the first video frame is acquired by the first camera module before switching, and the second video frame is acquired by the second camera module after switching. The image fusion therefore combines, to a certain extent, the image characteristics of the first video frame and the second video frame, and eliminates the image difference characteristics that may exist between two images shot by two different camera modules, so that the fifth video frame obtained after the image fusion processing is closer to the first video frame than the second video frame is. As a result, the video frames shot during the module switch are much less likely to exhibit picture problems such as jitter, stuttering, and excessive color and brightness differences between the video frames shot before and after switching, which improves the smoothness of the shot picture during the switch and improves the video shooting effect.
Referring to fig. 8, a flowchart of another video processing method according to an embodiment of the present application is shown. The video processing method can be applied to an electronic device, and the electronic device comprises: the first camera module and the second camera module. Alternatively, the electronic device may be a mobile terminal. The mobile terminal may refer to a mobile phone, a tablet, a computer, a wearable device, etc. As shown in fig. 8, the video processing method includes:
step 801, in the process of switching from the first camera module to the second camera module for video shooting, a first video frame collected by the first camera module and a second video frame collected by the second camera module are obtained. The first video frame and the second video frame are acquired simultaneously.
The explanation and implementation of this step may refer to the explanation and implementation of step 101, which is not described herein in detail.
Step 802, performing consistency correction processing on brightness and color of the second video frame according to brightness difference information and color difference information of the second video frame compared with the first video frame when the overlapping range of the second view angle of the second camera module and the first view angle of the first camera module is the second view angle or the first view angle, so as to obtain the processed second video frame.
Optionally, when the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module is the second field angle or the first field angle, the electronic device may perform brightness consistency correction processing on the second video frame according to the brightness difference information of the second video frame compared with the first video frame, so as to obtain an intermediate video frame. The electronic device then performs color consistency correction processing on the intermediate video frame according to the color difference information of the second video frame compared with the first video frame, so as to obtain the processed second video frame.
If the third video frame is the second video frame, the fourth video frame is the first video frame. In the case where the overlapping range of the second angle of view of the second camera module and the first angle of view of the first camera module is the second angle of view or the first angle of view, there is a first region image in the second video frame that completely overlaps with the image content of the first video frame. The electronic device may perform consistency correction processing on the brightness and color of the first area image in the second video frame according to the brightness difference information and color difference information of the corresponding pixels in the first area image and the first video frame, so as to obtain the processed second video frame.
If the third video frame is the first video frame, the fourth video frame is the second video frame. In the case where the overlapping range of the second angle of view of the second camera module and the first angle of view of the first camera module is the second angle of view or the first angle of view, there is a first region image in the first video frame that completely overlaps with the image content of the second video frame. The electronic device may perform consistency correction processing on the brightness and color of the second video frame according to the brightness difference information and color difference information of the corresponding pixels in the first region image of the first video frame and the second video frame, so as to obtain the processed second video frame.
As illustrated in fig. 9, there is a first region image 9011 in the first video frame 901 that completely overlaps with the image content of the second video frame 902. The electronic device performs brightness and color consistency correction processing on the pixel in row 1, column 1 of the second video frame according to the brightness difference information and color difference information of the pixel in row 1, column 1 of the first region image; ...; on the pixel in row i, column j of the second video frame according to the brightness difference information and color difference information of the pixel in row i, column j of the first region image; ...; and on the pixel in row p, column q of the second video frame according to the brightness difference information and color difference information of the pixel in row p, column q of the first region image, so as to obtain the processed second video frame. Here p is the number of pixel rows in the first video frame, and i is a positive integer less than or equal to p; q is the number of pixel columns in the first video frame, and j is a positive integer less than or equal to q.
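The per-pixel correction described above can be sketched vectorized over the whole region at once. This is illustrative only: the per-channel gain ratio and the gain clipping range are assumptions, not the patent's exact correction rule:

```python
import numpy as np

def consistency_correct(ref_region, src_region, eps=1e-6, gain_limit=(0.5, 2.0)):
    """Per-pixel brightness/color correction of src_region toward ref_region.

    Both arrays are HxWx3 and cover the same scene content. Each pixel of
    src_region is scaled channel-wise by the ratio of the corresponding
    reference pixel; gains are clipped to gain_limit to avoid amplifying
    noise (an illustrative safeguard).
    """
    ref = ref_region.astype(np.float32)
    src = src_region.astype(np.float32)
    gain = np.clip(ref / (src + eps), *gain_limit)
    return np.clip(src * gain, 0, 255).astype(np.uint8)
```

A production implementation would typically smooth the gain map spatially instead of applying raw per-pixel ratios.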
In an alternative implementation manner, the electronic device performs the consistency correction processing of the brightness and the color on the second video frame according to the brightness difference information and the color difference information of the second video frame compared with the first video frame, and the process of obtaining the processed second video frame may include the following steps 8021 to 8024.
In step 8021, a luminance value, a red component value, a green component value, and a blue component value of each pixel in the first video frame and the second video frame are acquired.
Optionally, the electronic device may acquire a brightness value, a red component value, a green component value, and a blue component value of each pixel in the first video frame from the original image data acquired by the first image capturing module; and acquiring a brightness value, a red component value, a green component value and a blue component value of each pixel in the second video frame from the original image data acquired by the second camera module.
In step 8022, a first color ratio and a second color ratio are calculated for each pixel in the first video frame and the second video frame. The first color ratio is the ratio of the red component value to the green component value. The second color ratio is the ratio of the blue component value to the green component value.
In this embodiment, the electronic device calculates the first color ratio and the second color ratio of each pixel in the first video frame, and the first color ratio and the second color ratio of each pixel in the second video frame, to obtain color ratio information C_grid1 and C_grid2, where C_grid1 = (R_grid1/G_grid1, B_grid1/G_grid1) and C_grid2 = (R_grid2/G_grid2, B_grid2/G_grid2).

Here C_grid1 denotes the first color ratio R_grid1/G_grid1 and the second color ratio B_grid1/G_grid1 of the first video frame; R_grid1, G_grid1, and B_grid1 in turn denote the red, green, and blue component values of each pixel in the first video frame. C_grid2 denotes the first color ratio R_grid2/G_grid2 and the second color ratio B_grid2/G_grid2 of the second video frame; R_grid2, G_grid2, and B_grid2 in turn denote the red, green, and blue component values of each pixel in the second video frame.
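The two color ratios can be computed per pixel as below (a minimal sketch; the function name and zero-guard are assumptions):

```python
import numpy as np

def color_ratios(frame):
    """Per-pixel first (R/G) and second (B/G) color ratios of an HxWx3 RGB frame."""
    f = frame.astype(np.float32)
    g = np.where(f[..., 1] == 0, 1e-6, f[..., 1])  # guard against division by zero
    return f[..., 0] / g, f[..., 2] / g
```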
In step 8023, a white balance gain, a color matrix, and a tone gain corresponding to each pixel in the second video frame are calculated according to the brightness, the first color ratio, and the second color ratio of the corresponding pixel in the first video frame and the second video frame, respectively.
Optionally, the electronic device calculates a pixel average value of the sub-pixels of each pixel in the first video frame as the luminance of the pixel. A pixel average value of the sub-pixels of each pixel in the second video frame is calculated as the luminance of that pixel.
Taking the calculation of the luminance of each pixel in the first video frame as an example, the electronic device calculates the brightness of each pixel according to the third formula. The third formula satisfies:

L_grid-avg = Sum(PixelValue_grid) / PixelNum;

where L_grid-avg represents the brightness of the pixel, PixelValue_grid represents the pixel value of a sub-pixel, Sum(PixelValue_grid) represents the sum of the pixel values of all sub-pixels in the pixel, and PixelNum represents the number of sub-pixels in the pixel.
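The third formula is a plain sub-pixel average; a direct sketch:

```python
def pixel_brightness(subpixel_values):
    """Brightness of one pixel: the mean of its sub-pixel values (the third formula)."""
    return sum(subpixel_values) / len(subpixel_values)
```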
In an alternative implementation manner, the electronic device may calculate an accumulated value of first color ratios of a plurality of pixels corresponding to the second video frame in the first video frame, to obtain the first color ratio of the first video frame; and calculating accumulated values of second color ratios of a plurality of pixels corresponding to the second video frame in the first video frame to obtain the second color ratio of the first video frame. Calculating accumulated values of first color ratios of a plurality of pixels corresponding to the first video frame in the second video frame to obtain the first color ratio of the second video frame; and calculating accumulated values of second color ratios of a plurality of pixels corresponding to the first video frame in the second video frame to obtain the second color ratio of the second video frame. The electronic device calculates a ratio of a first color ratio of the first video frame to a first color ratio of the second video frame and a ratio of a second color ratio of the first video frame to a second color ratio of the second video frame to obtain a white balance gain (wb gain) of each pixel in the second video frame.
The electronic device may obtain the red component coefficient of each pair of corresponding pixels in the first video frame and the second video frame as the ratio of the first color ratio of the pixel in the first video frame to the first color ratio of the corresponding pixel in the second video frame. Likewise, it may obtain the blue component coefficient of each pair of corresponding pixels as the ratio of the second color ratio of the pixel in the first video frame to the second color ratio of the corresponding pixel in the second video frame. A color matrix of each corresponding pixel in the second video frame is constructed from the red component coefficient and the blue component coefficient. Optionally, if a pixel in the second video frame has no corresponding pixel in the first video frame, an initial color matrix may be taken as the color matrix for that pixel; the initial color matrix indicates that the relevant values of the pixel are not changed.
The electronic device may obtain the tone gain (local gain) of each pair of corresponding pixels as the ratio of the luminance of the pixel in the first video frame to the luminance of the corresponding pixel in the second video frame. Optionally, if a pixel in the second video frame has no corresponding pixel in the first video frame, an initial tone gain may be taken as the tone gain for that pixel; the initial tone gain indicates that the relevant value of the pixel is not changed.
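The gain computations of steps 8022 and 8023 can be sketched together. This is one illustrative reading of the step: treating the ratios of the frames' color ratios as red/blue gains and the luminance ratio as the tone gain; names and the eps guard are assumptions:

```python
import numpy as np

def per_pixel_gains(ref, src, eps=1e-6):
    """Illustrative per-pixel gains for src (second frame) relative to ref (first frame).

    Returns (red gain, blue gain, tone gain) arrays: ratios of the two
    frames' color ratios, and the ratio of their per-pixel luminances.
    """
    rf, sf = ref.astype(np.float32), src.astype(np.float32)
    r1, g1, b1 = rf[..., 0], rf[..., 1] + eps, rf[..., 2]
    r2, g2, b2 = sf[..., 0], sf[..., 1] + eps, sf[..., 2]
    red_gain = (r1 / g1) / (r2 / g2 + eps)
    blue_gain = (b1 / g1) / (b2 / g2 + eps)
    tone_gain = rf.mean(axis=-1) / (sf.mean(axis=-1) + eps)  # luminance ratio
    return red_gain, blue_gain, tone_gain
```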
In step 8024, according to the white balance gain, the color matrix and the tone gain, white balance processing, color correction processing and tone mapping processing are sequentially performed on each pixel in the second video frame, so as to obtain a processed second video frame.
In this embodiment of the present application, the electronic device performs white balance processing on each pixel in the second video frame according to the white balance gain, performs color correction processing on each pixel in the second video frame according to the color matrix, and performs tone mapping processing on each pixel in the second video frame according to the tone gain, so as to obtain a processed second video frame.
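The three-stage application order of step 8024 (white balance, then color correction, then tone mapping) can be sketched for a single pixel. A minimal sketch with assumed shapes: per-channel white-balance gains, a 3x3 color matrix, and a scalar multiplicative tone gain:

```python
import numpy as np

def apply_pipeline(pixel, wb_gain, color_matrix, tone_gain):
    """Apply white balance, 3x3 color correction, then tone mapping to one RGB pixel."""
    p = np.asarray(pixel, np.float32) * np.asarray(wb_gain, np.float32)  # white balance
    p = np.asarray(color_matrix, np.float32) @ p                          # color correction
    return np.clip(p * tone_gain, 0, 255)                                 # tone mapping
```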
Step 803, the first area image in the third video frame is intercepted. The third video frame is the one of the first video frame and the second video frame that is captured with the relatively larger field angle.
The explanation and implementation of this step may refer to the explanation and implementation of step 102, which is not described herein in detail.
Step 804, performing image fusion processing on the fourth video frame and the first region image to obtain a fifth video frame. The fourth video frame is the one of the first video frame and the second video frame that is captured with the relatively smaller field angle.
The explanation and implementation of this step may refer to the explanation and implementation of step 103, which is not described herein in detail.
In some embodiments of the present application, a user may select a region of interest (region of interest, ROI) during shooting, so that the selected ROI becomes the center of the picture after zoomed shooting, that is, the center of the picture shot after the camera module switch. The user may also set the zoom-in or zoom-out of the picture after the switch to achieve a zooming special effect, after which the zoomed image frames are displayed, enriching the video shooting effect and improving the user's shooting experience.
Optionally, in step 804, the method further includes, after performing image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame:
step 805, obtaining a target center position of a region of interest of a user and a zoom multiple.
Optionally, the user may perform a region of interest selection input on the display screen such that the electronic device receives the region of interest selection input, and in response to the region of interest selection input, obtains a target center position of the region of interest. The user may perform a zoom factor selection input on the display screen such that the electronic device receives the zoom factor selection input, and in response to the zoom factor selection input, obtains a zoom factor.
For example, the region of interest selection input may be a click or long press input on a display screen. The electronic device determines a click or a long press point on the display screen as a target center position of the region of interest.
As another example, the zoom multiple selection input may be an input in the form of clicking, long pressing, sliding, or voice of a zoom control displayed on the display screen, to change a zoom multiple value corresponding to the zoom control. And the electronic equipment receives the zoom multiple selection input and determines that the zoom multiple value after the zoom control is changed is the zoom multiple set by the user.
Step 806, calculating a second lateral length and a second longitudinal length after zooming according to the zoom multiple, the first lateral length and the first longitudinal length of the fourth video frame.
Optionally, the electronic device calculates the product of the first lateral length and the zoom multiple to obtain the second lateral length, and calculates the product of the first longitudinal length and the zoom multiple to obtain the second longitudinal length.
And step 807, intercepting a second area image in the fifth video frame to obtain a sixth video frame. The center position of the second region image is the target center position, the transverse length is the second transverse length, and the longitudinal length is the second longitudinal length.
In this embodiment of the present application, the electronic device intercepts, from the fifth video frame, a second region image with the target center position of the region of interest as the center and the lateral length being the second lateral length and the longitudinal length being the second longitudinal length, and obtains a sixth video frame.
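Steps 806 and 807 together amount to computing the zoomed lengths and cropping around the target center. A minimal sketch under stated assumptions: the function name, (x, y) center convention, and the simple lower-bound clamping are illustrative, and a real implementation must also clamp the crop to the frame's right and bottom edges:

```python
def crop_zoom(frame, center, zoom, base_w, base_h):
    """Intercept the second region image from the fifth video frame.

    center is (x, y) in pixel coordinates; the crop is base_w * zoom wide
    (second lateral length) and base_h * zoom tall (second longitudinal length).
    """
    w, h = int(base_w * zoom), int(base_h * zoom)
    cx, cy = center
    x0, y0 = max(0, cx - w // 2), max(0, cy - h // 2)
    return frame[y0:y0 + h, x0:x0 + w]
```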
Illustratively, as shown in fig. 10, the electronic device displays a first interface 1001 on the display screen and displays a fifth video frame on it. The user clicks the region of interest on the display screen, so that the electronic device acquires the center coordinates of the region of interest as the target center position. The electronic device displays a second interface 1002 with a zoom control. The user slides the zoom control to change its zoom multiple, and the electronic device acquires the changed zoom multiple. Taking a zoom multiple greater than 1 as an example, the electronic device displays, on a third interface 1003, a sixth video frame enlarged around the target center position of the region of interest.
In some embodiments of the present application, the user may also select a transition special effect in the shooting process of the shooting module, so that features are added to the fifth video frame, interaction experience with the user is increased, video shooting effects are enriched, and shooting experience of the user is improved.
Optionally, after performing image fusion processing on the fourth video frame and the first area image in step 804 to obtain a fifth video frame, the method further includes: and acquiring a transition special effect identifier selected by a user, and carrying out special effect processing on the fifth video frame according to the transition special effect identifier to obtain a processed fifth video frame.
In some embodiments of the present application, the electronic device may display a plurality of transition special effect identifiers, receive a user's selection input on a target transition special effect identifier among them, and determine that the target transition special effect identifier is the one selected by the user. The electronic device then performs image fusion processing on the special effect template corresponding to the selected transition special effect identifier and the fifth video frame, obtaining the processed fifth video frame.
The selection input of the target transition special effect identifier is used to select a transition special effect for the fifth video frame. Optionally, the selection input may be a click, long press, slide, or voice input on the target transition special effect identifier. For example, as shown in fig. 11, the electronic device may display a fifth video frame together with a plurality of transition special effect identifiers: white field transition, superposition dissolution, and cross dissolution. When the user clicks superposition dissolution, the electronic device performs special effect processing on the fifth video frame and displays the fifth video frame with the superposition dissolution special effect.
In the embodiment of the present application, in the process of switching from the first camera module to the second camera module for video shooting, the first video frame and the second video frame acquired simultaneously by the first camera module and the second camera module are obtained; a first area image is intercepted in the third video frame, that is, the one of the two frames acquired with the relatively larger field angle; and image fusion processing is performed on the fourth video frame, that is, the one acquired with the relatively smaller field angle, and the first area image to obtain a fifth video frame. In this technical scheme, the first video frame is acquired by the first camera module before the switch, and the second video frame is acquired by the second camera module after the switch. By fusing the frame acquired with the relatively smaller field angle with the first area image intercepted from the frame acquired with the relatively larger field angle, the image characteristics of the first video frame and the second video frame can be combined to a certain extent, and the image differences that may exist between two images shot by two different camera modules are reduced. The fifth video frame obtained after the image fusion processing is therefore closer to the first video frame than the second video frame is, so that picture problems such as jitter, stutter, and excessive color and brightness differences are far less likely to occur between the video frames shot before and after the camera modules are switched. This improves the smoothness of the shot picture during the switch and improves the video shooting effect.
Referring to fig. 12, a block diagram of an electronic device according to an embodiment of the present application is shown. The electronic device can execute the video processing method provided by the embodiment of the application. As shown in fig. 12, the electronic apparatus 1200 includes: a first camera module 1201, a second camera module 1202, an image processing chip 1203, and a display device 1204.
The first camera module 1201 is configured to acquire a first video frame, simultaneously with the second camera module 1202, during the process of switching from the first camera module to the second camera module for video shooting.
The second camera module 1202 is configured to acquire a second video frame, simultaneously with the first camera module 1201, during the process of switching from the first camera module to the second camera module for video shooting.
The image processing chip 1203 is connected to the first image capturing module 1201 and the second image capturing module 1202. The image processing chip 1203 is configured to execute the video processing method provided in the embodiment of the present application.
In an alternative case, the display device 1204 is connected to the first camera module 1201 and the second camera module 1202. The display device 1204 is used for displaying the first video frame or the second video frame.
In another alternative case, the display device 1204 is also connected to the image processing chip 1203. The display device 1204 is for displaying the fifth video frame.
In the embodiment of the present application, during the process of switching from the first camera module to the second camera module for video shooting, the electronic device obtains the first video frame and the second video frame acquired simultaneously by the two camera modules, intercepts the first area image in the frame acquired with the relatively larger field angle, and fuses it with the frame acquired with the relatively smaller field angle to obtain the fifth video frame. Because the fifth video frame is closer to the first video frame than the second video frame is, picture problems such as jitter, stutter, and excessive color and brightness differences are far less likely to occur between the video frames shot before and after the switch, which improves the smoothness of the shot picture during the switch and the video shooting effect.
Optionally, the image processing chip 1203 further includes: an image correction module 12031 and an image Fusion (Fusion) module 12032 connected to each other.
The image correction module 12031 is configured to, when the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module is the second field angle or the first field angle (that is, when one field angle completely contains the other), perform consistency correction of brightness and color on the second video frame according to the brightness difference information and the color difference information of the second video frame relative to the first video frame, to obtain the processed second video frame.
The image fusion module 12032 is configured to intercept a first area image in a third video frame, where the third video frame is a video frame acquired by adopting a relatively large field angle in the first video frame and the second video frame; and the image fusion processing is further performed on a fourth video frame and the first area image to obtain a fifth video frame, wherein the fourth video frame is a video frame acquired by adopting a relatively smaller field angle in the first video frame and the second video frame.
Optionally, the image fusion module 12032 is further configured to: acquiring focusing distances when the first video frame and the second video frame are acquired, and a first transverse length and a first longitudinal length of the fourth video frame; calculating the center position of the first area image according to the focusing distance, the first field angle of the first camera module, the second field angle of the second camera module and the relative positions of the first camera module and the second camera module; and according to the center position, the first transverse length and the first longitudinal length, a first area image is intercepted in the third video frame, wherein the transverse length of the first area image is the first transverse length, and the longitudinal length is the first longitudinal length.
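To make the geometry above concrete, the following Python sketch estimates the crop rectangle from the two fields of view, the focusing distance, and the distance between the two lenses, under a simple pinhole-camera model. The patent does not give explicit equations for the center position, so the model and every parameter name here are assumptions for illustration.

```python
import math

def crop_region(wide_w, wide_h, fov_wide_deg, fov_narrow_deg,
                baseline_m, focus_m):
    """Estimate the rectangle of the wide-angle frame that matches the
    narrow module's field of view.

    A pinhole-camera sketch; the patent gives no explicit formulas, so the
    model and the parameter names are assumptions.
    """
    half_wide = math.tan(math.radians(fov_wide_deg) / 2)
    half_narrow = math.tan(math.radians(fov_narrow_deg) / 2)
    scale = half_narrow / half_wide          # fraction of the wide frame covered
    crop_w = round(wide_w * scale)
    crop_h = round(wide_h * scale)
    # Parallax from the distance between the two lenses shifts the crop
    # center; the shift shrinks as the focusing distance grows.
    shift_px = baseline_m / (2 * focus_m * half_wide) * wide_w
    cx = wide_w / 2 + shift_px
    cy = wide_h / 2
    return (round(cx - crop_w / 2), round(cy - crop_h / 2), crop_w, crop_h)
```

For example, a 90° wide module containing a roughly 53° narrow module covers half the frame along each axis, and a 1 cm baseline at a 2 m focusing distance shifts the center by about 10 px on a 4000 px wide frame.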
Optionally, the image fusion module 12032 is further configured to: performing image fusion processing on a fourth video frame and the first region image to obtain a fifth video frame when the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module is the second field angle or the first field angle;
and reducing transparency of a target image under the condition that the overlapping range of the second view angle and the first view angle is not the second view angle or the first view angle, and performing image fusion processing on the fourth video frame and the first region image to obtain a fifth video frame, wherein the target image is the first region image under the condition that the third video frame is the first video frame, and the target image is the fourth video frame under the condition that the third video frame is the second video frame.
Optionally, the image fusion module 12032 is further configured to: determining an adjustment variable value according to a ratio of the second angle of view to the first angle of view, the ratio being inversely proportional to the adjustment variable value; and reducing the transparency of the target image according to the adjustment variable value.
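A minimal sketch of this adjustment, assuming a linear mapping from the field-angle overlap ratio to the adjustment value (the patent states only that the two are inversely related; the base opacity is also an assumption):

```python
def target_opacity(overlap_ratio, base_opacity=0.5):
    """Reduce the transparency (raise the opacity) of the target image by an
    adjustment value that grows as the field-angle overlap ratio shrinks.

    The patent states only that the ratio and the adjustment value are
    inversely related; the linear mapping and base opacity are assumptions.
    """
    if not 0.0 < overlap_ratio <= 1.0:
        raise ValueError("overlap_ratio must be in (0, 1]")
    adjustment = 1.0 - overlap_ratio   # smaller overlap -> larger adjustment
    return min(1.0, base_opacity + adjustment)
```

With full overlap no adjustment is applied, while a small overlap drives the target image toward full opacity before fusion.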
Further optionally, the image correction module 12031 further includes: a micro control unit (Microcontroller Unit, MCU) 120311, a White Balance (WB) module 120312, a Color correction (Color correction) module 120313, and a Tone mapping (Tone mapping) module 120314.
The micro control unit 120311 is connected to the white balance module 120312, the color correction module 120313, and the tone mapping module 120314, respectively. The micro control unit 120311 is configured to acquire a luminance value, a red component value, a green component value, and a blue component value of each pixel in the first video frame and the second video frame; calculate a first color ratio and a second color ratio for each pixel in the first video frame and the second video frame, where the first color ratio is the ratio of the red component value to the green component value, and the second color ratio is the ratio of the blue component value to the green component value; and calculate the white balance gain, color matrix, and tone gain corresponding to each pixel in the second video frame according to the differences in luminance, first color ratio, and second color ratio between corresponding pixels of the first video frame and the second video frame.
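As one possible reading of this per-pixel derivation, the white balance gains could be chosen so that the second frame's color ratios match the first frame's. The formula below is an assumed interpretation for illustration, not the patent's stated computation.

```python
def wb_gain_from_reference(ref_pixel, tgt_pixel):
    """Per-pixel white-balance gains that pull the target frame's R/G and
    B/G color ratios toward the reference frame's.

    Matching the two color ratios channel by channel is an assumed
    interpretation; the patent does not state the exact gain formula.
    """
    ref_r, ref_g, ref_b = ref_pixel
    tgt_r, tgt_g, tgt_b = tgt_pixel
    ref_rg, ref_bg = ref_r / ref_g, ref_b / ref_g   # first / second color ratio
    tgt_rg, tgt_bg = tgt_r / tgt_g, tgt_b / tgt_g
    # Gains that make the target ratios equal the reference ratios; green
    # is left untouched as the anchor channel.
    return (ref_rg / tgt_rg, 1.0, ref_bg / tgt_bg)
```

A target pixel that is twice as red and half as blue as the reference pixel, for instance, receives gains of (0.5, 1.0, 2.0).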
The white balance module 120312, the color correction module 120313, and the tone mapping module 120314 are sequentially connected. The white balance module 120312 is configured to perform white balance processing on each pixel in the second video frame according to the white balance gain, so as to obtain a first intermediate video frame.
The color correction module 120313 is configured to perform color correction processing on each pixel in the first intermediate video frame according to the color matrix, so as to obtain a second intermediate video frame.
The tone mapping module 120314 is configured to perform tone mapping processing on each pixel in the second intermediate video frame according to the tone gain, so as to obtain a processed second video frame.
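The three correction stages above can be sketched for a single pixel as follows. The concrete gains and the 3x3 matrix are placeholders, since the patent derives them per pixel from the brightness and color-ratio differences between the two frames rather than fixing them.

```python
def correct_pixel(pixel, wb_gain, color_matrix, tone_gain):
    """Apply white balance, color correction and tone mapping to one RGB
    pixel, in the order the text describes.

    The gains and the 3x3 matrix are placeholders: the patent derives them
    per pixel from the brightness and color-ratio differences between the
    two frames.
    """
    # 1. White balance: per-channel multiplicative gains.
    r, g, b = [c * gain for c, gain in zip(pixel, wb_gain)]
    # 2. Color correction: a 3x3 matrix mixing the channels.
    mixed = [row[0] * r + row[1] * g + row[2] * b for row in color_matrix]
    # 3. Tone mapping: here a single scalar gain, clipped to the 8-bit range.
    return tuple(min(255, max(0, round(c * tone_gain))) for c in mixed)
```

With unit gains and an identity matrix the pixel passes through unchanged, mirroring the case where the two frames already agree.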
In some embodiments of the present application, the image correction module 12031 further includes: a demosaicing module 120315.
The demosaicing module 120315 is connected to the white balance module 120312 and the color correction module 120313, respectively. The demosaicing module 120315 is configured to perform demosaicing on the first intermediate video frame to obtain the processed first intermediate video frame.
Optionally, the image processing chip 1203 further includes: a gamma correction module 12033, a size conversion (Scaler) module 12034, and a Video encoder (Video encoder) 12035.
The gamma correction module 12033 is connected to the image correction module 12031 and the size conversion (Scaler) module 12034, respectively. The gamma correction module 12033 is configured to perform gamma correction processing on the processed second video frame to obtain the gamma-corrected second video frame.
The size conversion (Scaler) module 12034 is configured to convert the data format of the second video frame transmitted by the gamma correction module 12033 into a YUV format, and transmit the second video frame after the format conversion to the image fusion module 12032.
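A conversion of the kind the Scaler performs might use the BT.601 full-range matrix. The patent only says the output is "YUV format", so the coefficients and range below are an assumption.

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 RGB -> YUV for one pixel.

    The patent only says the Scaler outputs 'YUV format'; the matrix and
    range chosen here are assumptions.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v
```

Neutral grays map to zero chroma, so a pure luminance change between the two frames leaves U and V untouched.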
The video encoder 12035 is connected to the image fusion module 12032 and the display device, respectively. The video encoder 12035 is configured to perform encoding processing on the fifth video frame, and transmit the fifth video frame to the display device.
In some embodiments of the present application, the electronic device 1200 further includes a main control chip 1205. The main control chip is also called an application processor (Application Processor, AP). The main control chip 1205 is connected to the first camera module 1201, the second camera module 1202, and the image processing chip 1203, respectively. The main control chip 1205 is configured to acquire the raw image data collected by the first camera module 1201 and perform image preprocessing on it to obtain the first video frame. The main control chip 1205 is further configured to acquire the raw image data collected by the second camera module 1202 and perform image preprocessing on it to obtain the second video frame. The main control chip 1205 is configured to transmit the processed raw data (2x Processed Raw, where 2x indicates data from both camera modules) to the image processing chip 1203. The processed raw data includes the first video frame and the second video frame.
The main control chip 1205 is further configured to obtain the focusing distance (2x AF distance) at which the first camera module 1201 and the second camera module 1202 collect the first video frame and the second video frame, and transmit the focusing distance to the image processing chip 1203.
The main control chip 1205 is also configured to obtain statistical information (2x 2A (AE/AWB) stats) for the first video frame and the second video frame. The statistical information includes: a luminance value, a red component value, a green component value, and a blue component value for each pixel in the first video frame and the second video frame. The main control chip 1205 is also configured to transmit the statistical information to the image processing chip 1203.
Accordingly, the image processing chip 1203 is configured to receive three paths of data transmitted by the main control chip, that is, the processed raw data, the focusing distance and the statistical information, so as to execute the video processing method provided by the embodiment of the present application according to the three paths of data received from the main control chip.
In an alternative case, the main control chip 1205 is also directly connected to the display device 1204. The main control chip 1205 is further configured to directly transmit the first video frame or the second video frame to the display device 1204. Correspondingly, the display device 1204 is further configured to directly display the first video frame or the second video frame received from the main control chip 1205. It is to be understood that the main control chip 1205 may generate at least four paths of data, that is, the processed raw data, the focusing distance, the statistical information, and one path of data directly transmitted to the display device 1204.
In summary, in the electronic device provided by the embodiment of the present application, during the process of switching from the first camera module to the second camera module for video shooting, the first video frame and the second video frame acquired simultaneously by the two camera modules are obtained, the first area image is intercepted in the frame acquired with the relatively larger field angle, and image fusion processing is performed with the frame acquired with the relatively smaller field angle to obtain the fifth video frame. Because the fifth video frame is closer to the first video frame than the second video frame is, picture problems such as jitter, stutter, and excessive color and brightness differences are far less likely to occur between the video frames shot before and after the switch, which improves the smoothness of the shot picture during the switch and the video shooting effect.
It should be noted that the execution subject of the video processing method provided in the embodiments of the present application may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiments of the present application, the case where the video processing apparatus executes the video processing method is taken as an example to describe the video processing apparatus provided in the embodiments of the present application.
Please refer to fig. 13, which illustrates a block diagram of a video processing apparatus according to an embodiment of the present application. The video processing apparatus is applied to an electronic device, the electronic device including: the first camera module and the second camera module. As shown in fig. 13, the video processing apparatus 1300 includes: an acquisition module 1301, an interception module 1302 and a fusion module 1303.
The acquiring module 1301 is configured to acquire a first video frame acquired by the first camera module and a second video frame acquired by the second camera module in a process of switching from the first camera module to the second camera module for video shooting, where the first video frame and the second video frame are acquired simultaneously;
the intercepting module 1302 is configured to intercept a first area image in a third video frame, where the third video frame is a video frame acquired by adopting a relatively large field angle in the first video frame and the second video frame;
The fusion module 1303 is configured to perform image fusion processing on a fourth video frame and the first area image to obtain a fifth video frame, where the fourth video frame is a video frame acquired by adopting a relatively smaller field angle in the first video frame and the second video frame.
Optionally, the obtaining module 1301 is further configured to: acquiring focusing distances when acquiring a first video frame and a second video frame, and a first transverse length and a first longitudinal length of a fourth video frame;
the central position of the first area image is calculated according to the focusing distance, the first field angle of the first camera module, the second field angle of the second camera module and the relative positions of the first camera module and the second camera module;
the capturing module 1302 is further configured to capture a first area image in the third video frame according to the center position, the first lateral length, and the first longitudinal length, where the lateral length of the first area image is the first lateral length, and the longitudinal length is the first longitudinal length.
Optionally, the fusion module 1303 is further configured to:
under the condition that the overlapping range of the second field angle of the second camera shooting module and the first field angle of the first camera shooting module is the second field angle or the first field angle, performing image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame;
In the case that the overlapping range of the second angle of view and the first angle of view is not the second angle of view or the first angle of view, reducing the transparency of the target image, performing image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame,
wherein, the target image is the first region image when the third video frame is the first video frame, and the target image is the fourth video frame when the third video frame is the second video frame.
Optionally, the fusion module 1303 is further configured to: determining an adjustment variable value according to the proportion of the second view angle covering the first view angle, wherein the proportion is inversely proportional to the adjustment variable value; and reducing the transparency of the target image according to the adjustment variable value.
Optionally, the video processing apparatus 1300 further includes a correction module. The correction module is configured to, when the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module is the second field angle or the first field angle, perform consistency correction of brightness and color on the second video frame according to the brightness difference information and the color difference information of the second video frame relative to the first video frame, to obtain the processed second video frame.
Optionally, the correction module is further configured to:
acquiring a brightness value, a red component value, a green component value and a blue component value of each pixel in a first video frame and a second video frame;
calculating a first color ratio and a second color ratio of each pixel in the first video frame and the second video frame, wherein the first color ratio is a ratio of a red component value to a green component value, and the second color ratio is a ratio of a blue component value to a green component value;
respectively calculating a white balance gain, a color matrix and a tone gain corresponding to each pixel in the second video frame according to the brightness of the corresponding pixel in the first video frame and the second video frame and the difference value of the first color ratio and the second color ratio;
and according to the white balance gain, the color matrix and the tone gain, sequentially performing white balance processing, color correction processing and tone mapping processing on each pixel in the second video frame to obtain a processed second video frame.
Optionally, the acquiring module 1301 is further configured to acquire a target center position and a zoom multiple of the region of interest of the user;
the video processing apparatus 1300 further includes: the calculation module is used for calculating a second transverse length and a second longitudinal length after zooming according to the zooming multiple, the first transverse length and the first longitudinal length of the fourth video frame;
The intercepting module 1302 is further configured to intercept a second area image in the fifth video frame to obtain a sixth video frame, where a center position of the second area image is a target center position, a lateral length is a second lateral length, and a longitudinal length is a second longitudinal length.
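The zoomed crop described above can be sketched as follows. Dividing each side length by the zoom multiple and clamping the rectangle inside the frame are assumed details; the patent gives no explicit formula.

```python
def zoom_crop(frame_w, frame_h, center, zoom):
    """Crop rectangle for a digital zoom into a region of interest.

    Dividing each side by the zoom multiple keeps the aspect ratio;
    clamping keeps the rectangle inside the frame. Both details are
    assumed, as the patent gives no explicit formula.
    """
    crop_w, crop_h = round(frame_w / zoom), round(frame_h / zoom)
    cx, cy = center
    x = min(max(0, round(cx - crop_w / 2)), frame_w - crop_w)
    y = min(max(0, round(cy - crop_h / 2)), frame_h - crop_h)
    return x, y, crop_w, crop_h
```

The returned rectangle plays the role of the second area image: its center follows the target center position and its side lengths are the second lateral and longitudinal lengths.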
Optionally, the electronic device includes a main control chip and an image processing chip, the main control chip is connected with the image processing chip, and the main control chip performs image preprocessing on the original image data collected by the first camera module to obtain a first video frame; the main control chip performs image preprocessing on the original image data acquired by the second camera module to obtain a second video frame; and the main control chip transmits the first video frame and the second video frame to the image processing chip.
In the embodiment of the present application, during the process of switching from the first camera module to the second camera module for video shooting, the electronic device obtains the first video frame and the second video frame acquired simultaneously by the two camera modules, intercepts the first area image in the frame acquired with the relatively larger field angle, and fuses it with the frame acquired with the relatively smaller field angle to obtain the fifth video frame. Because the fifth video frame is closer to the first video frame than the second video frame is, picture problems such as jitter, stutter, and excessive color and brightness differences are far less likely to occur between the video frames shot before and after the switch, which improves the smoothness of the shot picture during the switch and the video shooting effect.
The video processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile terminal or a non-mobile terminal. By way of example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook or a personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile terminal may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine or a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video processing device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 2 to fig. 4, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 14, an electronic device 1400 is further provided according to an embodiment of the present application, including a processor 1401 and a memory 1402. The memory 1402 stores a program or instruction that can run on the processor 1401; when executed by the processor 1401, the program or instruction implements the steps of the above video processing method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition. The electronic device 1400 further includes: the first camera module and the second camera module.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 15 is a schematic hardware structure of an electronic device implementing an embodiment of the present application. The electronic device 1500 includes, but is not limited to: radio frequency unit 1501, network module 1502, audio output unit 1503, input unit 1504, sensor 1505, display unit 1506, user input unit 1507, interface unit 1508, memory 1509, and processor 1510. The electronic device 1500 further includes a first camera module and a second camera module.
Those skilled in the art will appreciate that the electronic device 1500 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1510 via a power management system so as to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 15 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
The processor 1510 is configured to obtain a first video frame collected by the first camera module and a second video frame collected by the second camera module in a process of switching from the first camera module to the second camera module for video shooting, where the first video frame and the second video frame are collected simultaneously;
intercept a first area image in a third video frame, where the third video frame is the one of the first video frame and the second video frame acquired with the relatively larger field angle;
and perform image fusion processing on a fourth video frame and the first area image to obtain a fifth video frame, where the fourth video frame is the one of the first video frame and the second video frame acquired with the relatively smaller field angle.
In the embodiment of the present application, during the process of switching from the first camera module to the second camera module for video shooting, the electronic device obtains the first video frame and the second video frame acquired simultaneously by the two camera modules, intercepts the first area image in the frame acquired with the relatively larger field angle, and fuses it with the frame acquired with the relatively smaller field angle to obtain the fifth video frame. Because the fifth video frame is closer to the first video frame than the second video frame is, picture problems such as jitter, stutter, and excessive color and brightness differences are far less likely to occur between the video frames shot before and after the switch, which improves the smoothness of the shot picture during the switch and the video shooting effect.
Optionally, the processor 1510 is further configured to:
acquiring the focusing distance at which the first video frame and the second video frame were captured, and a first transverse length and a first longitudinal length of the fourth video frame;
calculating the center position of the first region image according to the focusing distance, the first field angle of the first camera module, the second field angle of the second camera module, and the relative positions of the first camera module and the second camera module;
and intercepting the first region image from the third video frame according to the center position, the first transverse length and the first longitudinal length, wherein the transverse length of the first region image is the first transverse length and its longitudinal length is the first longitudinal length.
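The center-position calculation can be illustrated with a simplified pinhole-camera model that captures only the parallax term: the offset between the two modules' optical axes, projected into the larger-FOV image plane, shrinks as the focus distance grows. The `baseline_mm` parameter stands in for the "relative positions" of the two modules; a real implementation would also use the second field angle and account for lens distortion, so this is a sketch under stated assumptions.

```python
import math

def crop_center(img_w: int, img_h: int, baseline_mm: float,
                focus_distance_mm: float, fov_large_deg: float) -> tuple:
    """Estimate the centre of the first region image inside the
    larger-FOV frame, assuming a horizontal module baseline and a
    pinhole projection."""
    # Effective focal length of the large-FOV module, in pixel units
    f_px = (img_w / 2) / math.tan(math.radians(fov_large_deg) / 2)
    # Horizontal parallax shift caused by the baseline between modules;
    # it decays as 1 / focus_distance
    shift_px = f_px * baseline_mm / focus_distance_mm
    return img_w / 2 + shift_px, img_h / 2
```

The crop window of size (first transverse length, first longitudinal length) would then be taken around this center.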
Optionally, the processor 1510 is further configured to:
performing image fusion processing on the fourth video frame and the first region image to obtain the fifth video frame in the case that the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module equals the second field angle or the first field angle;
in the case that the overlapping range of the second field angle and the first field angle is neither the second field angle nor the first field angle, reducing the transparency of a target image and then performing image fusion processing on the fourth video frame and the first region image to obtain the fifth video frame,
wherein the target image is the first region image in the case that the third video frame is the first video frame, and the target image is the fourth video frame in the case that the third video frame is the second video frame.
Optionally, the processor 1510 is further configured to: determine an adjustment variable value according to a ratio of the second field angle to the first field angle, the ratio being inversely proportional to the adjustment variable value; and reduce the transparency of the target image according to the adjustment variable value.
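A minimal sketch of the adjustment-variable rule: the value is inversely proportional to the ratio of the second field angle to the first, and "reducing transparency" is modeled as raising an opacity value toward 1. The proportionality constant `k` and the `step` size are assumed placeholders; the patent does not specify them.

```python
def adjustment_value(second_fov_deg: float, first_fov_deg: float,
                     k: float = 1.0) -> float:
    """Adjustment variable inversely proportional to the ratio of the
    second field angle to the first (k is an assumed constant)."""
    return k * first_fov_deg / second_fov_deg

def reduce_transparency(alpha: float, second_fov_deg: float,
                        first_fov_deg: float, step: float = 0.1) -> float:
    """Reduce the target image's transparency (i.e. raise its opacity
    alpha) by a step scaled by the adjustment variable, clamped to 1."""
    return min(1.0, alpha + step * adjustment_value(second_fov_deg, first_fov_deg))
```

Under this rule, a larger field-angle gap between the modules yields a larger adjustment and thus a faster fade-in of the target image.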
Optionally, the processor 1510 is further configured to, when the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module is the second field angle or the first field angle, perform consistency correction processing on the brightness and color of the second video frame according to the brightness difference information and the color difference information of the second video frame compared with the first video frame, so as to obtain the processed second video frame.
Optionally, the processor 1510 is further configured to:
acquiring a brightness value, a red component value, a green component value and a blue component value of each pixel in the first video frame and the second video frame;
calculating a first color ratio and a second color ratio for each pixel in the first video frame and the second video frame, wherein the first color ratio is the ratio of the red component value to the green component value, and the second color ratio is the ratio of the blue component value to the green component value;
calculating a white balance gain, a color matrix and a tone gain for each pixel in the second video frame according to the brightness, the first color ratio and the second color ratio of the corresponding pixels in the first video frame and the second video frame;
and according to the white balance gain, the color matrix and the tone gain, sequentially performing white balance processing, color correction processing and tone mapping processing on each pixel in the second video frame to obtain a processed second video frame.
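The brightness and color consistency correction might look like the following per-pixel sketch, with green as the anchor channel for the R/G and B/G ratios and the luma ratio driving tone mapping. The patent's separate color-matrix step is folded into the per-channel gains here, so this is an approximation of the described pipeline, not a faithful implementation.

```python
import numpy as np

def consistency_correct(ref: np.ndarray, src: np.ndarray,
                        eps: float = 1e-6) -> np.ndarray:
    """Pull src (the second video frame) toward the colour and
    brightness of ref (the first video frame). RGB uint8 in, uint8 out."""
    ref_f = ref.astype(np.float32)
    out = src.astype(np.float32)
    # White balance: match the per-pixel R/G and B/G ratios of the
    # reference, keeping the green channel fixed as the anchor
    r_gain = (ref_f[..., 0] / (ref_f[..., 1] + eps)) / \
             (out[..., 0] / (out[..., 1] + eps) + eps)
    b_gain = (ref_f[..., 2] / (ref_f[..., 1] + eps)) / \
             (out[..., 2] / (out[..., 1] + eps) + eps)
    out[..., 0] *= r_gain
    out[..., 2] *= b_gain
    # Tone mapping: match the per-pixel luma (channel mean) to the
    # reference after white balancing
    tone_gain = ref_f.mean(axis=-1) / (out.mean(axis=-1) + eps)
    out *= tone_gain[..., None]
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

A production version would compute these statistics over matched patches rather than individual pixels to avoid amplifying sensor noise.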
Optionally, the processor 1510 is further configured to:
acquiring a target center position and a zoom multiple of a region of interest of a user;
calculating a second transverse length and a second longitudinal length after zooming according to the zooming multiple, the first transverse length and the first longitudinal length of the fourth video frame;
and intercepting a second area image in the fifth video frame to obtain a sixth video frame, wherein the center position of the second area image is the target center position, the transverse length is the second transverse length, and the longitudinal length is the second longitudinal length.
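Assuming the zoomed lengths are simply the original lengths divided by the zoom multiple (the patent only says they are "calculated according to" those quantities), the crop-window computation for the sixth video frame can be sketched as:

```python
def zoom_crop(first_w: float, first_h: float, zoom: float,
              target_center: tuple) -> tuple:
    """Return (left, top, second_w, second_h) of the second region
    image inside the fifth video frame, clamped so the window stays
    entirely within the frame."""
    second_w = first_w / zoom   # second transverse length (assumed rule)
    second_h = first_h / zoom   # second longitudinal length (assumed rule)
    cx, cy = target_center
    left = max(0.0, min(cx - second_w / 2, first_w - second_w))
    top = max(0.0, min(cy - second_h / 2, first_h - second_h))
    return left, top, second_w, second_h
```

The clamping step handles regions of interest near the frame edge, where a crop centred exactly on the target would extend past the image boundary.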
Optionally, the electronic device includes a main control chip and an image processing chip, the main control chip is connected with the image processing chip, and the main control chip performs image preprocessing on original image data collected by the first camera module to obtain a first video frame; the main control chip performs image preprocessing on the original image data acquired by the second camera module to obtain a second video frame; and the main control chip transmits the first video frame and the second video frame to the image processing chip.
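The division of labor between the two chips can be sketched as a toy pipeline. The class names and the trivial 10-bit-to-8-bit "preprocessing" are illustrative assumptions standing in for a real ISP front end.

```python
import numpy as np

class MainControlChip:
    """Performs image preprocessing on raw sensor data from each module."""
    def preprocess(self, raw: np.ndarray) -> np.ndarray:
        # Placeholder preprocessing: normalise 10-bit raw data to 8-bit
        return (raw.astype(np.float32) / 1023.0 * 255.0).astype(np.uint8)

class ImageProcessingChip:
    """Receives both preprocessed video frames for cropping and fusion."""
    def __init__(self):
        self.frames = []
    def receive(self, first_frame: np.ndarray, second_frame: np.ndarray):
        self.frames.append((first_frame, second_frame))

# The main control chip preprocesses raw data from both camera modules,
# then transmits the resulting frame pair to the image processing chip.
main_chip = MainControlChip()
isp_chip = ImageProcessingChip()
raw1 = np.random.randint(0, 1024, (4, 4), dtype=np.uint16)
raw2 = np.random.randint(0, 1024, (4, 4), dtype=np.uint16)
isp_chip.receive(main_chip.preprocess(raw1), main_chip.preprocess(raw2))
```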
It should be appreciated that in embodiments of the present application, the input unit 1504 may include a graphics processor (Graphics Processing Unit, GPU) 15041 and a microphone 15042, the graphics processor 15041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1506 may include a display panel 15061, and the display panel 15061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1507 includes at least one of a touch panel 15071 and other input devices 15072. The touch panel 15071 is also referred to as a touch screen. The touch panel 15071 may include two parts, a touch detection device and a touch controller. Other input devices 15072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1509 may be used to store software programs as well as various data. The memory 1509 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 1509 may include volatile memory or nonvolatile memory, or the memory 1509 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), or Direct Rambus RAM (DRRAM). The memory 1509 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1510 may include one or more processing units; optionally, the processor 1510 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1510.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, each process of the video processing method embodiment described above is implemented with the same technical effects, which are not repeated here to avoid redundancy.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is configured to run a program or an instruction to implement each process of the video processing method embodiment described above with the same technical effects, which are not repeated here to avoid redundancy.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement each process of the video processing method embodiment described above with the same technical effects, which are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (12)

1. A video processing method, applied to an electronic device, the electronic device comprising a first camera module and a second camera module, the method comprising:
in the process of switching from the first camera module to the second camera module for video shooting, acquiring a first video frame acquired by the first camera module and a second video frame acquired by the second camera module, wherein the first video frame and the second video frame are acquired simultaneously;
intercepting a first area image in a third video frame, wherein the third video frame is a video frame acquired by adopting a relatively large field angle in the first video frame and the second video frame;
and carrying out image fusion processing on a fourth video frame and the first area image to obtain a fifth video frame, wherein the fourth video frame is a video frame acquired by adopting a relatively smaller field angle in the first video frame and the second video frame.
2. The method according to claim 1, wherein the method further comprises:
acquiring focusing distances when the first video frame and the second video frame are acquired, and a first transverse length and a first longitudinal length of the fourth video frame;
calculating the center position of the first area image according to the focusing distance, the first field angle of the first camera module, the second field angle of the second camera module and the relative positions of the first camera module and the second camera module;
the capturing a first region image in the third video frame includes: and according to the center position, the first transverse length and the first longitudinal length, a first area image is intercepted in the third video frame, wherein the transverse length of the first area image is the first transverse length, and the longitudinal length is the first longitudinal length.
3. The method according to claim 1, wherein performing image fusion processing on the fourth video frame and the first area image to obtain a fifth video frame includes:
performing image fusion processing on a fourth video frame and the first region image to obtain a fifth video frame when the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module is the second field angle or the first field angle;
in the case that the overlapping range of the second field angle and the first field angle is neither the second field angle nor the first field angle, reducing the transparency of the target image and performing image fusion processing on the fourth video frame and the first region image to obtain a fifth video frame,
wherein the target image is the first region image when the third video frame is the first video frame, and the target image is the fourth video frame when the third video frame is the second video frame.
4. A method according to claim 3, wherein said reducing the transparency of the target image comprises:
determining an adjustment variable value according to a ratio of the second angle of view to the first angle of view, the ratio being inversely proportional to the adjustment variable value;
and reducing the transparency of the target image according to the adjustment variable value.
5. The method of claim 1, wherein prior to said capturing the first region image in the third video frame, the method further comprises:
and under the condition that the overlapping range of the second field angle of the second camera shooting module and the first field angle of the first camera shooting module is the second field angle or the first field angle, carrying out consistency correction processing on brightness and color on the second video frame according to brightness difference information and color difference information of the second video frame compared with the first video frame, and obtaining the processed second video frame.
6. The method of claim 5, wherein said performing a consistency correction process on the brightness and color of the second video frame according to the brightness difference information and the color difference information of the second video frame compared to the first video frame to obtain a processed second video frame comprises:
acquiring a brightness value, a red component value, a green component value and a blue component value of each pixel in the first video frame and the second video frame;
calculating a first color ratio and a second color ratio of each pixel in the first video frame and the second video frame, wherein the first color ratio is a ratio of a red component value to a green component value, and the second color ratio is a ratio of a blue component value to a green component value;
respectively calculating a white balance gain, a color matrix and a tone gain corresponding to each pixel in the second video frame according to the brightness, the first color ratio and the second color ratio of the corresponding pixel in the first video frame and the second video frame;
and according to the white balance gain, the color matrix and the tone gain, sequentially performing white balance processing, color correction processing and tone mapping processing on each pixel in the second video frame to obtain a processed second video frame.
7. The method of claim 1, wherein after performing the image fusion process on the fourth video frame and the first region image to obtain a fifth video frame, the method further comprises:
acquiring a target center position and zoom multiples of a region of interest of a user;
calculating a second transverse length and a second longitudinal length after zooming according to the zooming multiple, the first transverse length and the first longitudinal length of the fourth video frame;
and intercepting a second area image in the fifth video frame to obtain a sixth video frame, wherein the center position of the second area image is the target center position, the transverse length is the second transverse length, and the longitudinal length is the second longitudinal length.
8. The method of claim 1, wherein the electronic device comprises a main control chip and an image processing chip, the main control chip and the image processing chip are connected, the acquiring the first video frame acquired by the first camera module and the second video frame acquired by the second camera module comprises:
the main control chip performs image preprocessing on the original image data acquired by the first camera module to obtain a first video frame;
the main control chip performs image preprocessing on the original image data acquired by the second camera module to obtain a second video frame;
and the main control chip transmits the first video frame and the second video frame to the image processing chip.
9. A video processing apparatus, applied to an electronic device, the electronic device comprising a first camera module and a second camera module, the apparatus comprising:
the acquisition module is used for acquiring a first video frame acquired by the first camera module and a second video frame acquired by the second camera module in the process of switching from the first camera module to the second camera module for video shooting, wherein the first video frame and the second video frame are acquired simultaneously;
the intercepting module is used for intercepting a first area image in a third video frame, wherein the third video frame is a video frame acquired by adopting a relatively large field angle in the first video frame and the second video frame;
the fusion module is used for carrying out image fusion processing on a fourth video frame and the first area image to obtain a fifth video frame, wherein the fourth video frame is a video frame acquired by adopting a relatively smaller field angle in the first video frame and the second video frame.
10. The apparatus of claim 9, wherein the apparatus further comprises:
and the correction module is used for performing consistency correction processing on the brightness and color of the second video frame according to the brightness difference information and the color difference information of the second video frame compared with the first video frame, in the case that the overlapping range of the second field angle of the second camera module and the first field angle of the first camera module is the second field angle or the first field angle, so as to obtain the processed second video frame.
11. An electronic device comprising a first camera module, a second camera module, a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the video processing method of any one of claims 1 to 8.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the video processing method according to any of claims 1 to 8.
CN202311550337.5A 2023-11-20 2023-11-20 Video processing method, video processing device, electronic equipment and medium Pending CN117479025A (en)

Priority application: CN202311550337.5A, filed 2023-11-20
Publication: CN117479025A, published 2024-01-30
Family ID: 89629207


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination