WO2021052292A1 - Video capture method and electronic device - Google Patents

Video capture method and electronic device

Info

Publication number
WO2021052292A1
WO2021052292A1 · PCT/CN2020/115109 · CN2020115109W
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
electronic device
scene
time
mode
Prior art date
Application number
PCT/CN2020/115109
Other languages
English (en)
French (fr)
Inventor
葛璐
康凤霞
丁陈陈
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2021052292A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Definitions

  • This application relates to the field of electronic technology, and in particular to a video capture method and an electronic device.
  • Camera applications are among the most important applications on electronic devices such as mobile phones and tablets. Users can record and share pictures and videos through the camera application on an electronic device. Users now place increasingly high demands on camera applications and their shooting effects.
  • Time-lapse photography has become one of the important modes of camera applications on electronic devices.
  • In the time-lapse photography mode, the electronic device can collect a group of pictures through the camera, or collect a video through the camera and extract frames from it to obtain a group of pictures. The electronic device then adjusts the playback frame rate of the group of pictures collected over a longer recording time to obtain a video file whose playback time is shorter than the recording time.
  • When the video file is played, the slow change of objects over a long recording time is compressed into a short playback time, which can present marvelous scenes that are usually undetectable by the naked eye.
  • In some implementations, the user can manually adjust shooting parameters on the user interface of the camera application to shoot high-quality videos in shooting scenes of different brightness. For example, in a scene with very low light intensity, the user needs to manually increase the exposure time and adjust parameters such as the International Organization for Standardization (ISO) sensitivity to improve the quality of the video captured in the dark scene.
  • The embodiments of this application provide a video capture method and electronic device.
  • The electronic device can adjust shooting parameters, shooting modes, and video post-processing algorithms according to different shooting scenes, which can improve the quality of the video captured in the time-lapse photography mode.
  • In a first aspect, an embodiment of the present application provides a video capture method.
  • The method includes: an electronic device displays a camera application interface, where the camera application interface includes a time-lapse photography mode icon.
  • In response to a first user operation acting on the time-lapse photography mode icon, the electronic device collects at least one picture and recognizes a first shooting scene from the at least one picture.
  • The first shooting scene includes a backlit scene, a normal light scene, or a low-light scene.
  • The electronic device determines a first shooting parameter according to the first shooting scene; the first shooting parameter is related to the exposure amount.
  • The electronic device collects multiple pictures according to the first shooting parameter and encodes the multiple pictures to obtain a video file.
  • The frame interval time when the video file is played is less than or equal to the frame interval time at which the multiple pictures are collected.
  • In this way, the electronic device can adjust the shooting parameters of the camera according to the identified shooting scene, and use the adjusted shooting parameters to collect pictures to form a time-lapse photography video file. Using corresponding shooting parameters for different shooting scenes can improve the quality of the video captured in the time-lapse photography mode.
  • In some embodiments, the method further includes: the electronic device determines a first shooting mode according to the first shooting scene, where the first shooting mode includes a video recording mode or a photographing mode. In this case, the electronic device collecting multiple pictures according to the first shooting parameter includes: the electronic device collects multiple pictures according to the first shooting parameter and the first shooting mode.
  • In this way, the electronic device can also adjust the shooting mode according to the identified shooting scene, and use the adjusted shooting mode to collect pictures to form a time-lapse photography video file.
  • Using corresponding shooting parameters and shooting modes for different shooting scenes can further improve the quality of the video captured in the time-lapse photography mode.
  • The following describes the process of forming a time-lapse photography video file in the video recording mode and in the photographing mode.
  • When the first shooting mode is the video recording mode, the frame interval time at which the multiple pictures are collected is a first time interval. The electronic device encoding the multiple pictures to obtain a video file includes: the electronic device extracts pictures from the multiple pictures to obtain sampled pictures, and encodes the sampled pictures with a set first frame interval time to obtain the video file; that is, the frame interval time when the video file is played is the first frame interval time.
  • The first frame interval time of the video file is less than or equal to the first time interval.
  • In other words, the frame interval time when the obtained time-lapse photography video file is played (i.e., the first frame interval time) is less than or equal to the frame interval time at which the pictures in the video were collected, and is also less than the frame interval of the sampled pictures.
  • For example, the frame interval time when the time-lapse video file is played is 1/24 second, the frame interval time at which the pictures in the video were collected is 1/24 second, and the frame interval at which the sampled pictures were collected is half an hour.
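The video-mode sampling described above can be sketched as follows. This is a minimal illustration; the function name and the example numbers (24 fps capture, one frame kept every 2 seconds) are hypothetical, not taken from the patent.

```python
def extract_frames(captured, capture_interval_s, sample_interval_s):
    """Keep one frame per sampling interval from a densely captured sequence."""
    step = max(1, round(sample_interval_s / capture_interval_s))
    return captured[::step]

# Hypothetical numbers: capture at 24 fps for 10 s, keep one frame every 2 s.
captured = list(range(240))                      # 240 frames at 1/24 s intervals
sampled = extract_frames(captured, 1 / 24, 2.0)  # 5 frames kept
# Encoded at a playback frame interval of 1/24 s, these 5 frames play in ~0.21 s.
```

The playback frame interval (1/24 s) is far smaller than the sampling interval (2 s), which is exactly the relationship the text requires.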
  • When the first shooting mode is the photographing mode, the frame interval time at which the multiple pictures are collected is a second time interval.
  • The second time interval is greater than the first time interval and is determined by the exposure time; the first shooting parameter includes the exposure time.
  • The electronic device encoding the multiple pictures to obtain a video file includes: the electronic device encodes with a set second frame interval time; that is, the frame interval time when the video file is played is the second frame interval time.
  • The second frame interval time is shorter than the second time interval.
  • When the first shooting scene includes the backlit scene or the normal light scene, the first shooting mode is the video recording mode; when the first shooting scene includes the low-light scene, the first shooting mode is the photographing mode.
  • In the photographing mode, the camera collects a picture at a set time interval, and this interval can provide sufficient exposure time to improve the brightness of each picture. Therefore, in a low-light scene, video capture in the time-lapse photography mode is performed by taking pictures; since each frame is brighter, the resulting video quality is also higher.
  • In some embodiments, the electronic device detects that the collection of one picture is completed before collecting the next. Specifically, the electronic device collecting multiple pictures according to the first shooting parameter and the first shooting mode includes: for each of the multiple pictures, the electronic device detects whether the picture has been collected within the frame sampling interval; if so, the electronic device collects the next picture; if not, the electronic device collects the next picture after completing the collection of the current picture. In this way, it can be ensured that a complete frame is captured within each frame sampling interval, which reduces frame-sampling failures caused by the sampling interval being shorter than the single-frame processing time.
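The completion check above can be sketched as a simple capture loop. This is a hedged illustration with hypothetical names; `capture_one` stands in for whatever blocking capture call the camera HAL would provide.

```python
import time

def capture_loop(capture_one, sample_interval_s, n_frames):
    """Collect n_frames pictures, one per sampling interval, but never start
    the next capture before the previous frame has finished."""
    frames = []
    for _ in range(n_frames):
        start = time.monotonic()
        frames.append(capture_one())      # blocks until this frame completes
        elapsed = time.monotonic() - start
        if elapsed < sample_interval_s:
            # Frame finished inside the interval: wait for the next slot.
            time.sleep(sample_interval_s - elapsed)
        # Otherwise the frame overran the interval; start the next capture
        # immediately, so sampling degrades gracefully instead of failing.
    return frames
```

The key design point mirrors the text: when the sampling interval is shorter than the single-frame processing time, the loop waits for the frame rather than dropping it.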
  • The electronic device can recognize the shooting scene based on a single picture or based on multiple pictures.
  • The electronic device may set timestamps for the multiple pictures in sequence, so that the video file composed of these pictures is played at the set frame interval. For example, with the timestamp unit set to 1/8000 second, 1 second corresponds to 8000 timestamp units.
  • The electronic device can assign a timestamp to each sequentially received picture according to the timestamp unit. Specifically, the electronic device receives the first picture and sets its timestamp to 0; it receives the second picture and sets its timestamp to 400 timestamp units; and so on, to obtain the video file.
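Using the example numbers above (a 1/8000-second timestamp unit and a 400-unit step between frames), timestamp assignment can be sketched as:

```python
TIMESTAMP_UNIT_HZ = 8000   # 1 second corresponds to 8000 timestamp units
FRAME_STEP_UNITS = 400     # step between frames, from the example above

def assign_timestamps(n_frames, step_units=FRAME_STEP_UNITS):
    """Frame i receives timestamp i * step_units (first frame at 0)."""
    return [i * step_units for i in range(n_frames)]

stamps = assign_timestamps(4)                             # [0, 400, 800, 1200]
frame_interval_s = FRAME_STEP_UNITS / TIMESTAMP_UNIT_HZ   # 0.05 s per frame
```

With these numbers, a 400-unit step corresponds to a playback frame interval of 400/8000 = 0.05 s, i.e. a 20 fps time-lapse.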
  • The first shooting parameter related to the exposure amount may include the shutter, exposure time, aperture value, exposure value, ISO, and frame interval.
  • The exposure amount represents how much light the photosensitive element in the camera receives during the exposure time.
  • For the shutter, exposure time, aperture value, exposure value, and ISO, the electronic device can automatically adjust these shooting parameters through the 3A algorithms: auto focus, auto exposure, and auto white balance (AF, AE, and AWB).
  • In some embodiments, the method further includes: the electronic device displays a first control on the time-lapse photography interface, where the first control is used to adjust the second time interval within a value range greater than or equal to the exposure time; the first shooting parameter includes the second time interval.
  • The camera application interface displayed by the electronic device may include a shooting-scene prompt, such as a low-light scene prompt, and the camera application interface may be a time-lapse photography interface.
  • The time-lapse photography interface may include the first control, that is, a control for adjusting the frame interval.
  • In some embodiments, the method further includes: the electronic device determines a first video post-processing algorithm according to the first shooting scene, where the first video post-processing algorithm corresponds to the first shooting scene. Before the electronic device encodes the multiple pictures to obtain a video file, the method further includes: the electronic device processes the multiple pictures using the first video post-processing algorithm to obtain multiple processed pictures. The electronic device encoding the multiple pictures to obtain a video file includes: the electronic device encodes the multiple processed pictures to obtain the video file.
  • The image processing module can use video post-processing algorithms to perform anti-shake, noise reduction, and other processing on the collected pictures or videos.
  • In a low-light scene, in addition to anti-shake, noise reduction, and other processing, the image processing module can perform dark-light optimization through a dark-light optimization algorithm to improve the quality of the pictures collected in the low-light scene.
  • In a backlit scene, in addition to anti-shake, noise reduction, and other processing, the image processing module can also apply an HDR algorithm.
  • With the HDR algorithm, multiple collected pictures can be combined into one picture.
  • The multiple pictures have different exposure times; pictures with different exposure times differ in brightness and provide different details, thereby improving the quality of the picture in the backlit scene.
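As a rough illustration of the idea (not the patent's actual HDR algorithm), differently exposed frames can be merged by weighting each pixel by how well exposed it is, so detail from the short exposure fills blown-out regions and detail from the long exposure fills dark regions:

```python
def fuse_exposures(images):
    """Merge same-scene images taken with different exposure times.
    Each image is a list of luminance values in [0, 255]; pixels near
    mid-gray (well exposed) get the largest weight."""
    fused = []
    for pixels in zip(*images):
        num = den = 0.0
        for v in pixels:
            w = max(1.0 - abs(v - 127.5) / 127.5, 1e-6)  # peak weight at mid-gray
            num += w * v
            den += w
        fused.append(num / den)
    return fused

# Short and long exposures of the same two pixels (illustrative values).
short = [10, 10]     # underexposed frame
long_ = [250, 127]   # brighter frame; its second pixel is well exposed
merged = fuse_exposures([short, long_])
```

The merged value leans toward whichever input frame exposed that pixel best, which is the essence of combining frames with different exposure times.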
  • In some embodiments, the camera application interface further includes a shooting control, and the electronic device collecting multiple pictures according to the first shooting parameter and encoding them to obtain a video file includes: in response to a second user operation acting on the shooting control, the electronic device collects multiple pictures according to the first shooting parameter and encodes the multiple pictures to obtain a video file.
  • The pictures in the video file may also include pictures collected by the electronic device before the second user operation is detected.
  • Alternatively, the electronic device may, in response to the first user operation, collect multiple pictures according to the first shooting parameter and encode the multiple pictures to obtain a video file. That is, when the first user operation is detected, the electronic device can determine the first shooting parameter and the first shooting mode according to the recognized first shooting scene, then collect multiple pictures according to the first shooting parameter and the first shooting mode, and encode the multiple pictures to obtain a video file.
  • In some embodiments, the method further includes: the electronic device displays a preview of the multiple pictures collected according to the first shooting parameter and the first shooting mode.
  • The camera application may include a mode loading module, a shooting control module, and a preview display module.
  • The HAL layer may include modules related to the time-lapse photography mode of the camera: a capability enabling module, an image acquisition module, a scene recognition module, and an image processing module.
  • The method provided in the first aspect of the embodiments of the present application can be implemented as follows. First, the camera application loads the time-lapse photography mode in response to the user's operation of starting the camera application. After the mode is loaded, the user can start the time-lapse photography mode by touching the time-lapse photography mode icon. Then, the HAL layer identifies the shooting scene and reports it to the shooting control module at the application layer. The shooting control module adjusts the shooting parameters and shooting mode for the time-lapse photography mode and sends them back to the image acquisition module at the HAL layer. Finally, the image acquisition module collects pictures or videos according to the adjusted shooting parameters and shooting mode.
  • The image processing module can also determine the video post-processing algorithm to use according to the identified shooting scene, and process the collected pictures or videos with that algorithm.
  • The processed video data can then be encoded by the encoding module into video files.
  • The preview display module can also obtain the processed video data for preview display.
  • The electronic device may display an icon corresponding to each mode.
  • The embodiment of the present application provides a process for the scene recognition module to recognize a shooting scene from a picture or video.
  • The scene recognition module can obtain the exposure parameters of a collected picture from the picture or video, and determine the brightness difference between the bright and dark areas of the picture.
  • The scene recognition module can use the exposure parameters to determine the shooting scene. For example, if the exposure parameter is EV, the camera application can issue a notification to the HAL layer to detect the exposure parameter.
  • The scene recognition module can calculate the exposure value of the picture and the brightness difference between its bright and dark areas. When the exposure value is greater than a first threshold and the brightness difference between the bright and dark areas is less than a second threshold, the scene recognition module may determine that the shooting scene is a low-light scene.
  • When the brightness difference between the bright and dark areas of the picture is greater than the second threshold, the scene recognition module may determine that the shooting scene is a backlit scene.
  • When the exposure value is less than the first threshold and the brightness difference is less than the second threshold, the scene recognition module may determine that the shooting scene is a normal light scene.
  • The scene recognition module can also apply the same principles to multiple pictures, so as to determine the shooting scene more accurately.
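The threshold logic above can be condensed into a small classifier. The structure follows the text; the concrete threshold values in the example are hypothetical, since the patent leaves them unspecified.

```python
def classify_scene(exposure_value, brightness_diff, ev_threshold, diff_threshold):
    """Classify the shooting scene from a frame's exposure value and the
    brightness difference between its bright and dark areas."""
    if brightness_diff >= diff_threshold:
        return "backlit"       # large bright/dark contrast
    if exposure_value > ev_threshold:
        return "dark light"    # high exposure value, low contrast
    return "normal light"      # low exposure value, low contrast

# Hypothetical thresholds for illustration only.
EV_T, DIFF_T = 8.0, 60.0
```

A multi-picture variant could run this per frame and take the majority vote, matching the note that recognition over several pictures is more accurate.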
  • In some embodiments, the method further includes: the electronic device recognizes, from the collected pictures, that the shooting scene has changed from the first shooting scene to a second shooting scene; the electronic device determines a second shooting parameter according to the second shooting scene; the electronic device collects multiple pictures according to the second shooting parameter and encodes them to obtain the video file.
  • In this way, when the shooting scene changes, the shooting parameters can be re-adjusted to improve the quality of the captured pictures, and thus of the captured video.
  • The electronic device may also determine a second shooting mode according to the second shooting scene, and collect multiple pictures according to the second shooting parameter and the second shooting mode.
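The re-adjustment described above can be sketched as a control loop. All names here are hypothetical, and the mode/parameter choices are placeholder illustrations of what the shooting control module might select, not values from the patent.

```python
def shooting_config(scene):
    """Map a recognized scene to a (mode, parameters) pair; the values are
    placeholders for whatever the shooting control module would choose."""
    if scene == "dark light":
        return "photo", {"exposure_time_s": 0.5, "iso": 3200}
    if scene == "backlit":
        return "video", {"hdr": True}
    return "video", {}

def record(frames_with_scene):
    """Re-run scene recognition per collected frame and switch the shooting
    mode/parameters whenever the recognized scene changes."""
    current = mode = params = None
    out = []
    for frame, scene in frames_with_scene:
        if scene != current:            # scene changed: re-adjust
            current = scene
            mode, params = shooting_config(scene)
        out.append((frame, mode))
    return out
```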
  • In a second aspect, an embodiment of the present application provides an electronic device. The electronic device includes one or more processors, a memory, and a display screen. The memory is coupled with the one or more processors and stores computer program code comprising computer instructions. The one or more processors invoke the computer instructions to cause the electronic device to: display a camera application interface, where the camera application interface includes a time-lapse photography mode icon; in response to a first user operation acting on the time-lapse photography mode icon, collect at least one picture and recognize a first shooting scene from the at least one picture, where the first shooting scene includes a backlit scene, a normal light scene, or a low-light scene; determine a first shooting parameter according to the first shooting scene, where the first shooting parameter is related to the exposure amount; and collect multiple pictures according to the first shooting parameter and encode the multiple pictures to obtain a video file, where the frame interval time when the video file is played is less than or equal to the frame interval time at which the multiple pictures are collected.
  • The electronic device provided by the second aspect can adjust the shooting parameters of the camera according to the identified shooting scene, and use the adjusted parameters to collect pictures to form a time-lapse photography video file. Using corresponding shooting parameters for different shooting scenes can improve the quality of the video captured in the time-lapse photography mode.
  • The one or more processors are further configured to invoke the computer instructions to cause the electronic device to: determine a first shooting mode according to the first shooting scene, where the first shooting mode includes a video recording mode or a photographing mode; and collect multiple pictures according to the first shooting parameter and the first shooting mode.
  • When the first shooting mode is the video recording mode, the frame interval time at which the multiple pictures are collected is a first time interval. The one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to: extract pictures from the multiple pictures to obtain sampled pictures, and encode the sampled pictures with a set first frame interval time to obtain a video file, where the first frame interval time of the video file is less than or equal to the first time interval.
  • The first shooting parameter includes the exposure time. When the first shooting mode is the photographing mode, the frame interval time at which the multiple pictures are collected is a second time interval; the second time interval is greater than the first time interval and is determined by the exposure time. The one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to: encode with a set second frame interval time to obtain a video file, where the second frame interval time is shorter than the second time interval.
  • The one or more processors are further configured to invoke the computer instructions to cause the electronic device to: display a first control on the time-lapse photography interface, where the first control is used to adjust the second time interval within a value range greater than or equal to the exposure time; the first shooting parameter includes the second time interval.
  • When the first shooting scene includes the backlit scene or the normal light scene, the first shooting mode is the video recording mode; when the first shooting scene includes the low-light scene, the first shooting mode is the photographing mode.
  • The one or more processors are further configured to invoke the computer instructions to cause the electronic device to: determine a first video post-processing algorithm according to the first shooting scene, where the first video post-processing algorithm corresponds to the first shooting scene; process the multiple pictures using the first video post-processing algorithm to obtain multiple processed pictures; and encode the multiple processed pictures to obtain a video file.
  • The camera application interface further includes a shooting control. The one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to: in response to a second user operation acting on the shooting control, collect multiple pictures according to the first shooting parameter and encode the multiple pictures to obtain a video file.
  • The one or more processors are further configured to invoke the computer instructions to cause the electronic device to: recognize, from the collected pictures, that the shooting scene has changed from the first shooting scene to a second shooting scene; determine a second shooting parameter according to the second shooting scene; and collect multiple pictures according to the second shooting parameter and encode them to obtain the video file.
  • In a third aspect, the embodiments of the present application provide a chip applied to an electronic device.
  • The chip includes one or more processors configured to invoke computer instructions to cause the electronic device to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • In a fourth aspect, the embodiments of the present application provide a computer program product containing instructions.
  • When the computer program product is run on an electronic device, the electronic device is caused to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium including instructions that, when executed on an electronic device, cause the electronic device to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • The electronic device provided in the second aspect, the chip provided in the third aspect, the computer program product provided in the fourth aspect, and the computer storage medium provided in the fifth aspect are all used to implement the methods provided in the embodiments of the present application. For the beneficial effects they achieve, refer to the beneficial effects of the corresponding method; details are not repeated here.
  • FIG. 1 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
  • FIG. 2 is a software structure block diagram of the electronic device 100 provided by an exemplary embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a video capture method provided by an embodiment of the present application.
  • FIGS. 4 to 8 are schematic diagrams of some human-computer interaction interfaces provided by embodiments of the present application.
  • FIG. 9 is a schematic flowchart of a video file collection and preview process provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a video file collection and preview process provided by an embodiment of the present application.
  • The embodiments of the present application provide a video capture method and electronic device.
  • The electronic device can recognize the current shooting scene, which includes, for example, a backlit scene, a low-light scene, or a normal light scene.
  • The electronic device can adjust the shooting parameters and shooting mode of the camera according to the identified shooting scene, and use the adjusted parameters and mode to collect pictures or videos.
  • The electronic device can also determine the video post-processing algorithm to adopt according to the identified shooting scene, and use that algorithm to process the collected pictures or videos.
  • The processed data can then be encoded to obtain video files.
  • In this way, the electronic device can adjust shooting parameters, shooting modes, and video post-processing algorithms according to different shooting scenes, which can improve the quality of the video collected in the time-lapse photography mode.
  • the electronic device adjusts the playback frame rate of a group of pictures collected during a longer recording time to obtain a video file with a shorter playback time than the recording time.
  • the following describes the process of adjusting the playback frame rate in photo mode and video mode.
  • the electronic device can take a certain number of pictures at a lower frame rate, and then increase the playback frame rate of these pictures to obtain a video file.
  • the playback frame rate of the pictures refers to the display frequency of the pictures sequentially displayed when the video file composed of these pictures is played.
  • the video file is played, because the playback frame rate is higher than the acquisition frame rate when the image is collected, the slow change of the object during a long recording time is compressed into a short playback time for playback.
  • the played video can thus present changes that are usually imperceptible to the naked eye.
  • the electronic device can capture video at a lower frame rate, and the captured video contains some pictures. Then the electronic device extracts frames of the video to retain some pictures, and increases the playback frame rate of these pictures to obtain a video file. When the obtained video file is played, the slowly changing process of the object in a long recording time is compressed into a short playing time and presented.
  • in a backlit scene, the light that enters the camera from behind the subject is relatively strong, while the light reflected from the front of the subject is weak. Therefore, in the collected pictures, the front of the subject is darker and the background behind it is brighter.
  • in a normal light scene, the light intensity on the front of the subject and the light intensity on the back of the subject both reach a certain threshold.
  • Low light scene refers to a scene with low ambient light intensity, that is, the light intensity on the front and back of the subject is low.
  • in a low light scene, the exposure time of the collected picture needs to be increased to raise the brightness of the picture, thereby improving the picture quality.
  • if the light intensity of the shooting scene is less than a light intensity threshold, the scene is a dark light scene. The electronic device can increase the exposure time to raise the brightness of the captured pictures.
  • the quality of the images in different shooting scenes is different.
  • the brightness difference between the bright and dark areas of the captured picture in a backlit scene is relatively large, for example, greater than the second threshold.
  • the exposure value of a picture collected in a normal light scene is less than the first threshold, and the brightness difference between the bright and dark areas of the picture is less than the second threshold.
  • the exposure value of the picture in the dark scene is greater than the first threshold.
  • the embodiments of the present application are not limited to the above three scenes, and may also include other shooting scenes, such as indoor scenes.
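The scene criteria above (backlit: bright/dark brightness difference above a second threshold; dark: exposure value above a first threshold; normal: both below) can be sketched as follows. The threshold values here are illustrative placeholders (the patent does not disclose concrete values), and the check order when several conditions hold is an assumption:

```python
def classify_scene(exposure_value, brightness_difference,
                   first_threshold=12.0, second_threshold=100):
    """Return 'backlit', 'dark', or 'normal' (thresholds are placeholders)."""
    if brightness_difference > second_threshold:
        # Large gap between bright and dark areas of the picture -> backlit
        return "backlit"
    if exposure_value > first_threshold:
        # A high exposure value (long exposure needed) -> dark light scene
        return "dark"
    # EV below the first threshold and a small brightness spread -> normal light
    return "normal"
```

Other scene types (such as indoor scenes) would simply add further branches to the same dispatch.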
  • the shooting mode of the camera may include a video recording mode and a photographing mode.
  • the video recording mode means that the electronic device collects a video at a standard frame rate (for example, 24 pictures per second), and then extracts frames from this video, retaining only the pictures of some frames.
  • the video files can be obtained after adjusting the playback frame rate of these pictures (that is, determining the frame interval time when the time-lapse video file is played).
  • for example, the opening of a flower bud takes about 3 days and 3 nights, that is, 72 hours.
  • the electronic device collects video at a standard frame rate (for example, 24 pictures per second), and the recording time is 72 hours.
  • the captured video contains 6,220,800 (that is, 72 ⁇ 60 ⁇ 60 ⁇ 24) pictures.
  • the frame sampling interval set by the electronic device is half an hour, that is, the electronic device extracts one picture every half-hour recording time interval in the captured video.
  • a total of 144 pictures are extracted from the video whose recording time is 72 hours; these extracted pictures are called frame-extracted pictures.
  • the electronic device then arranges the 144 pictures in sequence, and sets the playback frame rate of the video file composed of the collected pictures to a standard playback frame rate, for example, 24 pictures per second.
  • when the electronic device plays the video, it can present the flowering process, recorded over 3 days and 3 nights, within a playing time of 6 seconds (144 pictures ÷ 24 pictures per second).
  • in the video recording mode, the frame interval time when the obtained time-lapse photography video file is played (1/24 second) is less than or equal to the frame interval time at which the pictures in the video were collected (here also 1/24 second), and is far less than the interval at which the frame-extracted pictures were taken from the recording (half an hour).
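The numbers in the flower-bud example above can be checked with a short calculation:

```python
# Recording at 24 fps for 72 hours, extracting one frame every half hour,
# then playing back the retained frames at 24 fps.
RECORD_FPS = 24
RECORD_HOURS = 72
EXTRACT_INTERVAL_S = 30 * 60  # half an hour between retained frames

captured = RECORD_HOURS * 60 * 60 * RECORD_FPS           # frames recorded
retained = RECORD_HOURS * 60 * 60 // EXTRACT_INTERVAL_S  # frames kept
playback_seconds = retained / RECORD_FPS                 # playback duration

print(captured, retained, playback_seconds)  # 6220800 144 6.0
```

The 72-hour recording thus collapses into a 6-second clip, matching the figures in the text.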
  • the photographing mode means that the electronic device collects a picture at a certain time interval, and the time interval is the recording time interval, that is, the frame interval time when the picture is collected.
  • the electronic device sets the playback frame rate of the collected pictures to the standard playback frame rate to obtain the video file. For example, in the preceding example of recording the bud opening for 72 hours, the electronic device collects one picture every half an hour, for a total of 144 pictures.
  • the electronic device sets the playback frame rate of the collected pictures to 24 pictures per second to obtain a video file.
  • when the electronic device plays the video file, it can present the flowering process recorded over 72 hours within a playing time of 6 seconds.
  • the time interval for collecting pictures can also be referred to as the frame sampling interval.
  • the frame interval time when the obtained time-lapse photography video file is played is less than the frame interval time when the picture is collected.
  • the frame interval when the time-lapse video file is played is 1/24 second, and the frame interval when the picture is collected is half an hour.
  • in the video recording mode, the electronic device needs to collect a fixed number of pictures every second, and the exposure time of each picture is fixed, or is adjustable only within a certain range. In a dark scene, each picture needs a longer exposure time to increase its brightness. Therefore, if the electronic device performs video capture in the time-lapse photography mode through video recording in a dark light scene, the brightness of each frame is low, and the quality of the obtained video is therefore also low.
  • in the photographing mode, the camera collects a picture at a certain time interval, and this interval can provide sufficient exposure time for each picture to improve its brightness. Therefore, in a low-light scene, video acquisition in the time-lapse photography mode is performed by taking pictures; since the brightness of each frame is higher, the obtained video quality is also higher.
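The rationale above — that video recording caps each frame's exposure at the frame period, while photographing does not — can be sketched as a simplified mode-selection rule. The decision criterion is an illustrative assumption, not the patent's exact logic:

```python
def choose_shooting_mode(required_exposure_s, video_fps=24):
    """In video mode each frame's exposure cannot exceed the frame period,
    so fall back to photographing mode when more exposure time is needed."""
    frame_period_s = 1.0 / video_fps  # max exposure per frame in video mode
    if required_exposure_s > frame_period_s:
        return "photographing"        # dark scene: long per-picture exposure
    return "video_recording"
```

A dark scene needing, say, a 0.5-second exposure cannot fit into a 1/24-second frame period, so the photographing mode is selected.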
  • the shooting parameters may include shutter, exposure time, aperture value (AV), exposure value (EV), ISO, and frame interval. The following are introduced separately.
  • the shutter is a device that controls the length of time the light enters the camera to determine the exposure time of the picture. The longer the shutter stays open, the more light enters the camera and the longer the exposure time of the picture. The shorter the shutter stays open, the less light enters the camera and the shorter the exposure time of the picture.
  • Shutter speed is the time the shutter stays open.
  • the shutter speed is the time interval from the open state to the closed state of the shutter. During this period of time, the object can leave an image on the negative.
  • the faster the shutter speed, the clearer the picture of a moving object on the image sensor; conversely, the slower the shutter speed, the more blurred the picture of a moving object.
  • Exposure time refers to the time required for the shutter to be opened in order to project light onto the photosensitive surface of the photosensitive material of the camera.
  • the exposure time is determined by the sensitivity of the photosensitive material and the illuminance on the photosensitive surface. The longer the exposure time, the more light enters the camera. Therefore, a long exposure time is required in a dark scene, and a short exposure time is required in a backlit scene.
  • the shutter speed is the exposure time.
  • the aperture value (f-number) is the ratio of the focal length of the lens to the diameter of the lens aperture. The smaller the aperture value, the larger the aperture opening and the more light enters the camera; the larger the aperture value, the less light enters the camera.
  • the exposure value is a combination of the shutter speed and the aperture value to express the light-transmitting ability of the camera lens.
  • the definition of exposure value can be:
  • EV = log2(N^2 / t)
  • where N is the aperture value (f-number) and t is the exposure time (shutter speed) in seconds.
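Using the standard APEX definition EV = log2(N²/t), with N the aperture value and t the exposure time in seconds, the exposure value can be computed numerically:

```python
import math

def exposure_value(aperture_n, exposure_time_s):
    """EV = log2(N^2 / t): N is the f-number, t the shutter time in seconds."""
    return math.log2(aperture_n ** 2 / exposure_time_s)

# f/8 at 1/60 s:
print(round(exposure_value(8, 1 / 60), 1))  # 11.9
```

Halving the exposure time (or stopping the aperture down by one stop) raises the EV by exactly 1, which is why EV compactly combines shutter speed and aperture value.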
  • ISO is used to measure the sensitivity of the film to light.
  • for a less sensitive (low ISO) film, a longer exposure time is needed to achieve the same brightness as a more sensitive film.
  • for a more sensitive (high ISO) film, a shorter exposure time is needed to achieve the same brightness as a less sensitive film.
  • a frame of pictures is extracted from the collected video every certain recording time.
  • the certain recording time is the frame interval.
  • the electronic device collects video at a standard frame rate (i.e., 24 pictures per second); the recording time is 72 hours, and a total of 6,220,800 (i.e., 72 × 60 × 60 × 24) pictures are collected.
  • the video composed of these pictures has a playback time of 72 hours when it is played.
  • the frame sampling interval of the electronic device is half an hour, that is, the electronic device extracts one picture every half hour of the recording time interval in the collected video, and a total of 144 pictures are extracted from the video with a recording time of 72 hours.
  • the frame interval time at which pictures are collected is the first time interval.
  • the first time interval is 1/24 second.
  • the frame interval is the time difference between collecting two adjacent pictures.
  • the frame sampling interval is the frame interval time when the picture is collected, and the frame sampling interval can be referred to as the second time interval.
  • the second time interval is greater than the first time interval, and the second time interval is determined by the exposure time.
  • for the shutter, exposure time, aperture value, exposure value, and ISO, the electronic device can realize automatic adjustment of these shooting parameters through algorithms such as auto focus (AF), automatic exposure (AE), automatic white balance (AWB), and 3A (AF, AE, and AWB combined).
  • Autofocus means that the electronic device adjusts the position of the focusing lens to maximize the high-frequency component of the picture, thereby obtaining higher picture contrast.
  • focusing is a process of continuous accumulation.
  • the electronic device compares the contrast of the pictures taken by the lens at different positions, so as to obtain the position of the lens when the contrast of the picture is the largest, and then determine the focal length of the focus.
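Contrast-detection focusing as described above can be illustrated with a toy search. Here `capture_contrast` stands in for shooting a frame at a given lens position and measuring its contrast; the synthetic contrast curve is an assumption for demonstration only:

```python
def autofocus(lens_positions, capture_contrast):
    """Try each lens position and keep the one with the highest contrast."""
    best_pos, best_contrast = None, float("-inf")
    for pos in lens_positions:
        c = capture_contrast(pos)  # shoot at this position, measure contrast
        if c > best_contrast:
            best_pos, best_contrast = pos, c
    return best_pos

# Synthetic contrast curve peaking at position 5:
focus = autofocus(range(11), lambda p: -(p - 5) ** 2)
print(focus)  # 5
```

Real implementations typically use a coarse-to-fine search rather than an exhaustive sweep, but the principle of maximizing contrast over lens position is the same.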
  • Automatic exposure means that the electronic device automatically sets the exposure value according to the available light source conditions.
  • the electronic device can automatically set the shutter speed and aperture value according to the exposure value of the currently collected picture to realize the automatic setting of the exposure value.
  • the color of the object will change due to the color of the projected light, and the pictures collected by the electronic device will have different color temperatures under different light colors.
  • White balance is closely related to the surrounding light. Regardless of the ambient light, the camera of the electronic device can recognize white and restore other colors based on white. The automatic white balance can realize that the electronic device adjusts the fidelity of the picture color according to the light source conditions.
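One classic heuristic in this spirit is gray-world white balance, which scales each color channel so that the channel means become equal. This is a sketch of the general idea, not necessarily the algorithm the device actually uses:

```python
def gray_world_gains(pixels):
    """pixels: list of (r, g, b) tuples; return per-channel gains that make
    the channel means equal (the gray-world assumption)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

def apply_gains(pixels, gains):
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A bluish cast: the blue mean is far above the red mean.
cast = [(100, 100, 200), (50, 100, 150)]
balanced = apply_gains(cast, gray_world_gains(cast))
```

After the gains are applied, the red, green, and blue means coincide, which removes the overall color cast while preserving relative differences between pixels.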
  • 3A means auto focus, auto exposure and auto white balance.
  • the shutter, exposure time, aperture value, exposure value, ISO and frame interval, these shooting parameters are all related to the exposure of the picture.
  • the exposure amount may represent how much light the photoreceptor in the camera receives during the exposure time.
  • the video post-processing algorithm is used to process multiple collected pictures or videos.
  • the video post-processing algorithm can perform anti-shake processing to reduce the blurring of the picture caused by the jitter of the electronic device.
  • the video post-processing algorithm can be called by the image processing module provided in the embodiment of the present application to process the collected pictures or videos.
  • the image processing module can use different video post-processing algorithms for processing.
  • the video post-processing algorithm may be a high dynamic range (HDR) algorithm, and the image processing module may use the HDR algorithm to synthesize multiple collected pictures into one picture.
  • the multiple pictures have different exposure times.
  • the image processing module uses the best details of the pictures obtained at each exposure time to synthesize an HDR picture.
  • This HDR picture can be sent to the preview display module as a frame of picture for preview, or it can be sent to the encoding module for encoding.
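The HDR merging step can be illustrated with a toy exposure-fusion sketch that weights each sample by how well exposed it is, so that the best-exposed version of each pixel dominates the result. This is a simplification for illustration, not the device's actual HDR algorithm:

```python
def fuse_pixel(samples):
    """samples: brightness values (0-255) of one pixel at different exposures."""
    def weight(v):
        # Weight peaks at mid-gray; near-black or near-white samples count less.
        return max(1e-6, 1.0 - abs(v - 127.5) / 127.5)
    total = sum(weight(v) for v in samples)
    return sum(weight(v) * v for v in samples) / total

def fuse_frames(frames):
    """frames: equally sized brightness lists, one per exposure."""
    return [fuse_pixel(column) for column in zip(*frames)]

# Two exposures of a three-pixel row: a dark frame and a brighter frame.
merged = fuse_frames([[20, 250, 130], [90, 180, 140]])
```

For the first pixel the well-exposed sample (90) outweighs the underexposed one (20); for the second, the blown-out sample (250) contributes little, pulling the result toward 180.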
  • the video post-processing algorithm may include a low-light optimization algorithm to improve the quality of the pictures collected in the low-light scene.
  • the image processing module can use video post-processing algorithms to perform anti-shake, noise reduction and other processing on the collected pictures or videos.
  • the image processing module can perform processing such as anti-shake and noise reduction, and can also perform dark light optimization processing through the dark light optimization algorithm.
  • the image processing module can perform processing such as anti-shake and noise reduction, and can also use HDR algorithms for processing.
  • FIG. 1 shows a schematic diagram of the structure of an electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
  • the I2C interface is a bidirectional synchronous serial bus, which includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may couple the touch sensor 180K, the charger, the flash, the camera 193, etc., respectively through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect earphones and play audio through earphones. This interface can also be used to connect to other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After the low-frequency baseband signal is processed by the baseband processor, it is passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays a picture or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display pictures, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can realize the collection function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, etc., to realize the image collection module of the HAL layer in the embodiment of the present application.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing, which is converted into a picture or video visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still pictures or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital picture or video signal.
  • ISP outputs digital pictures or video signals to DSP for processing.
  • DSP converts digital pictures or video signals into standard RGB, YUV and other formats of pictures or video signals.
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • Digital signal processors are used to process digital signals. In addition to digital pictures or video signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • applications such as intelligent cognition of the electronic device 100 can be realized, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, a picture or video playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 100 answers a call or receives a voice message, the user can hear the voice by bringing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • when making a sound, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example, when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
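The pressure-dependent dispatch just described can be sketched as follows. The threshold value and the instruction names are hypothetical; only the shape of the logic (below the first pressure threshold: view, at or above it: create) comes from the text.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed, normalised pressure units

def dispatch_touch(pressure):
    """Map a touch on the short message application icon to an instruction
    according to the touch operation intensity.

    Sketch of the behaviour described above; threshold and instruction
    names are assumptions for illustration.
    """
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    # Intensity greater than or equal to the first pressure threshold.
    return "create_new_short_message"
```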
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
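Under a simple pinhole-camera model, the compensation distance mentioned above can be sketched: an angular shake of θ shifts the image by roughly f·tan(θ), so the lens is driven the same distance in the opposite direction. The formula and units are illustrative assumptions, not the device's actual anti-shake algorithm.

```python
import math

def ois_compensation_mm(shake_angle_deg, focal_length_mm):
    """Distance (mm) the lens module shifts to counteract a shake angle.

    Sketch of the anti-shake computation: the sign is negative because
    the lens moves in reverse to the shake. Assumed pinhole model.
    """
    return -focal_length_mm * math.tan(math.radians(shake_angle_deg))
```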
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
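One common way to turn an air pressure reading into an altitude estimate is the international barometric formula, sketched below; whether the device uses exactly this formula is an assumption.

```python
def altitude_from_pressure(pressure_hpa, sea_level_hpa=1013.25):
    """Estimate altitude in metres from measured air pressure in hPa.

    International barometric formula (standard atmosphere). The device's
    actual positioning-assist algorithm is not specified in the source.
    """
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```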
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • when the electronic device 100 is a flip device, the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D.
  • Further, according to the detected opening or closing state of the holster or the flip, features such as automatic unlocking upon flip opening can be set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The sensor can also be used to identify the posture of the electronic device, and is applicable to applications such as switching between landscape and portrait screens and pedometers.
  • the distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in a holster mode or a pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • in other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature.
  • in still other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
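The temperature processing strategy above can be sketched as a simple policy function. All threshold values and action names below are illustrative assumptions, not the device's actual configuration.

```python
def thermal_policy(temperature_c, high=45.0, low=-10.0, very_low=-20.0):
    """Pick a temperature-handling action for the strategy described above.

    Thresholds are hypothetical; only the structure (throttle when hot,
    heat the battery when cold, boost voltage when very cold) follows
    the text.
    """
    if temperature_c > high:
        return "throttle_processor"      # reduce performance near the sensor
    if temperature_c < very_low:
        return "boost_battery_voltage"   # avoid shutdown at very low temperature
    if temperature_c < low:
        return "heat_battery"            # warm the battery 142
    return "normal"
```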
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the bone that vibrates when the human body produces voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects of the motor 191.
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 by way of example.
  • FIG. 2 shows a software structure block diagram of an electronic device 100 provided by an exemplary embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces.
  • the Android system can be divided into three layers, from top to bottom: the application layer, the application framework layer, and the hardware abstraction layer (HAL), which are described below:
  • the application layer includes a series of application packages, such as a camera application. Besides the camera application, it can also include other applications, such as gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and SMS applications.
  • the camera application can provide users with a time-lapse photography mode.
  • the camera application may include a mode loading module, a shooting control module, and a preview display module, which are described below:
  • the mode loading module is used to query the HAL layer for the mode when the camera application is started, and load the mode according to the query result.
  • the modes may include night scene mode, portrait mode, photo mode, short video mode, video mode, time-lapse photography mode, etc.
  • the shooting control module is used to start, together with the preview display module, when a switch to the time-lapse photography mode is detected, and to notify the capability enabling module of the HAL layer to start the modules related to the time-lapse photography mode.
  • the shooting control module may also notify the encoding module and the image processing module in the application framework layer in response to the user's touch operation on the start recording control in the user interface of the camera application.
  • the encoding module starts to obtain the video data stream from the image processing module of the HAL layer.
  • the encoding module can encode the video stream to generate a video file.
  • the shooting control module may also notify the encoding module and the image processing module in the application framework layer in response to the touch operation.
  • the encoding module stops acquiring the video data stream from the image processing module of the HAL layer.
  • the preview display module is used to receive the video data stream from the image processing module or image acquisition module of the HAL layer, and display preview pictures or preview videos on the user interface, and the preview pictures and videos can be updated in real time.
  • the application framework layer (framework, FWK) provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a camera service interface (Camera Service), which may provide a communication interface between the camera application and the HAL layer in the application layer.
  • the application framework layer can also include coding modules.
  • the encoding module may receive a notification from the shooting control module in the camera application to start or stop receiving the video data stream from the image processing module of the HAL layer, and encode the video data stream to obtain a video file.
  • the HAL layer contains modules for providing time-lapse photography modes for camera applications.
  • these modules that provide the time-lapse photography mode can collect pictures or videos, identify the shooting scenes based on the collected pictures or videos, and report the identified shooting scenes.
  • the HAL layer also provides corresponding post-processing algorithms for different shooting scenes.
  • the HAL layer may include modules related to the time-lapse photography mode of the camera: a capability enabling module, an image acquisition module, a scene recognition module, and an image processing module, which are described below:
  • the capability enabling module is used to start the modules of the HAL layer related to the time-lapse photography mode after receiving the notification from the shooting control module, that is, to start the image acquisition module, the scene recognition module, and the image processing module.
  • For example, when the user switches to the time-lapse photography mode, the shooting control module in the camera application can notify the capability enabling module of the HAL layer, and after receiving the notification, the capability enabling module starts the image acquisition module, the scene recognition module, and the image processing module.
  • the image collection module is used to call the camera to collect pictures or videos, and send the collected pictures or videos to the scene recognition module and the image processing module.
  • the scene recognition module is used to perform scene recognition according to the received pictures or videos to identify shooting scenes of different brightness, such as normal light scenes, backlit scenes, and dark light scenes.
  • the image processing module can include video post-processing algorithms, and different brightness shooting scenes can correspond to different video post-processing algorithms.
  • the image processing module can process pictures or videos through video post-processing algorithms to obtain a video data stream, and send the video data stream to the preview display module for preview display, and send it to the encoding module to form a video file.
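The HAL-layer module chain described above can be modelled as a toy pipeline. The module names follow the text; the frame representation (plain strings) and the per-scene post-processing algorithms are placeholder assumptions.

```python
class TimeLapsePipeline:
    """Toy model of the HAL-layer chain: scene recognition feeding the
    image processing module, which picks a per-scene post-processing
    algorithm for the preview display and encoding modules."""

    # Hypothetical per-scene post-processing algorithms.
    POST_PROCESSING = {
        "normal": lambda frame: frame + "+normal",
        "backlit": lambda frame: frame + "+hdr",
        "dark": lambda frame: frame + "+denoise",
    }

    def recognise_scene(self, frame):
        # Stand-in for the scene recognition module: real code would
        # inspect exposure value and bright/dark-area contrast.
        if "dark" in frame:
            return "dark"
        if "backlit" in frame:
            return "backlit"
        return "normal"

    def process(self, frame):
        # Image processing module: apply the post-processing algorithm
        # matching the recognised shooting scene, producing stream data
        # for both preview display and encoding.
        scene = self.recognise_scene(frame)
        return self.POST_PROCESSING[scene](frame)
```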
  • the software architecture of the electronic device shown in FIG. 2 is only an implementation manner of the embodiment of the present application. In actual applications, the electronic device may also include more or fewer software modules, which is not limited here.
  • FIG. 3 is a schematic flowchart of a video capture method provided by an embodiment of the present application.
  • the video capture method includes steps S101 to S124.
  • the camera application can load the time-lapse photography mode in response to the user's operation of starting the camera application.
  • the user can start the time-lapse photography mode by touching the time-lapse photography mode icon.
  • the HAL layer can identify the shooting scene and report it to the shooting control module of the application layer.
  • the shooting control module can adjust the shooting parameters and shooting methods in the time-lapse photography mode and send it back to the image acquisition module of the HAL layer.
  • the image acquisition module can collect pictures or videos according to the adjusted shooting parameters and shooting methods.
  • the image processing module can also determine the video post-processing algorithm used according to the identified shooting scene, use the video post-processing algorithm to process the collected pictures or videos, and the video data stream obtained after processing can be encoded by the encoding module Get the video file.
  • the preview display module can also obtain the video data stream obtained after processing for preview display.
  • Steps S101 to S103 introduce the process of loading the time-lapse photography mode.
  • Steps S104 to S118 introduce the adjustment process of shooting parameters and shooting modes.
  • Steps S119 to S124 describe the process of forming a video file and previewing it. They are described separately below.
  • the user can start the camera application by operating the application icon of the camera application, such as a touch operation.
  • the mode loading module queries the HAL layer for the mode.
  • the HAL layer can provide a time-lapse photography mode for camera applications. That is, in this time-lapse photography mode, in the HAL layer, the capability enabling module, the image acquisition module, the scene recognition module, and the image processing module can be activated to perform their respective functions.
  • the HAL layer may also provide other modes for camera applications, such as portrait mode, normal mode, night scene mode, and video recording mode, which are not limited in the embodiments of the present application.
  • the mode loading module may query the capability enabling module for the mode.
  • the capability enable module can respond to the query of the mode loading module and feed back to the mode loading module the modes provided by the HAL layer for the camera application.
  • the modes provided include: time-lapse photography mode, portrait mode, normal mode, night scene mode, and video mode, etc.
  • the mode loading module loads the modes according to the query result.
  • the loaded modes include the time-lapse photography mode.
  • the mode loading module also initializes the corresponding modules of each mode in the application layer and the HAL layer during the loading process.
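The mode query and loading flow of steps S102 to S103 can be sketched with two stubs; the class and method names here are hypothetical, chosen only to mirror the module names in the text.

```python
class CapabilityEnableModule:
    """HAL-side stub: answers the mode loading module's query with the
    modes the HAL layer provides for the camera application."""
    def query_modes(self):
        return ["time-lapse", "portrait", "normal", "night", "video"]

class ModeLoadingModule:
    """Application-side stub: queries the HAL layer on camera start and
    loads each mode so the UI can display one icon per loaded mode."""
    def __init__(self, hal):
        self.hal = hal
        self.loaded = []

    def load(self):
        # Query the capability enabling module, then load (here: record)
        # each mode from the query result.
        for mode in self.hal.query_modes():
            self.loaded.append(mode)
        return self.loaded
```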
  • the electronic device 100 may display an icon corresponding to each mode.
  • the shooting control module can notify the capability enabling module, image acquisition module, scene recognition module, and image processing module in the HAL layer to start to perform their respective functions .
  • other modes are similar to the time-lapse photography mode, and the corresponding modules in the HAL layer can be activated in response to the user's touch operation on the icon corresponding to the mode.
  • FIG. 4 is a schematic diagram of a human-computer interaction interface provided by an embodiment of the present application.
  • the electronic device 100 can display a user interface 10, which is the main screen interface 10 of the electronic device 100.
  • the main screen interface 10 includes a calendar widget 101, a weather widget 102, application icons 103, a status bar 104, and a navigation bar 105, which are described below:
  • the calendar widget 101 can be used to indicate the current time, such as date, day of the week, hour and minute information, and so on.
  • the weather widget 102 can be used to indicate the type of weather, such as cloudy to clear, light rain, etc., can also be used to indicate information such as temperature, and can also be used to indicate a location.
  • the application icons 103 can include a WeChat icon, a Twitter icon, a Facebook icon, a Sina Weibo icon, a Tencent QQ icon, a YouTube icon, a gallery icon, a camera icon, etc., and may also include other application icons, which is not limited in the embodiment of the present application. Any application icon can be used to respond to a user's operation, such as a touch operation, so that the electronic device starts the application corresponding to the icon.
  • the status bar 104 may include the name of the operator (for example, China Mobile), time, WI-FI icon, signal strength, and current remaining power.
  • the navigation bar 105 may include system navigation keys such as a return button 1051, a home screen button 1052, and a call out task history button 1053.
  • the main screen interface is an interface displayed by the electronic device 100 after detecting a user operation on the home screen button 1052 on any user interface.
  • in response to a user operation on the return button 1051, the electronic device 100 may display the previous user interface of the current user interface.
  • in response to a user operation on the home screen button 1052, the electronic device 100 may display the main screen interface 10.
  • in response to a user operation on the task history button 1053, the electronic device 100 can display the tasks that the user has recently opened.
  • the naming of each navigation key can also be different. For example, 1051 can be called a Back Button, 1052 can be called a Home Button, and 1053 can be called a Menu Button, which is not limited in the embodiment of the application.
  • the navigation keys in the navigation bar 105 are not limited to virtual keys, and can also be implemented as physical keys.
  • the user can start the camera application by touching the camera icon.
  • the mode loading module executes steps S102 to S103. After the mode loading module completes the loading mode, the electronic device 100 may display an icon corresponding to each mode.
  • the loaded modes include night scene mode, portrait mode, photo mode, short video mode, video mode, time-lapse photography mode, and the like.
  • the electronic device 100 may display the camera application interface 20.
  • the camera application interface 20 may include an icon 204 corresponding to the loaded mode.
  • the icon 204 may include a night scene mode icon 204A, a portrait mode icon 204B, a photo mode icon 204C, a short video mode icon 204D, a recording mode icon 204E, and more icons 204F.
  • the More icon 204F is used to display icons of the other loaded modes.
  • the photographing control module can initiate a mode corresponding to the icon in response to the user's touch operation on any one of the icons 204.
  • the camera application interface 20 may also include a captured image echo control 201, a shooting control 202, a camera switching control 203, a viewfinder frame 205, a focusing control 206A, a setting control 206B, and a flash switch 206C, which are described below:
  • the captured image echo control 201 is used for the user to view the captured pictures and videos.
  • the camera switching control 203 is used to switch the camera that collects the image between the front camera and the rear camera.
  • the viewfinder frame 205 is used for real-time preview and display of the collected pictures.
  • the focusing control 206A is used to adjust the focus of the camera.
  • the setting control 206B is used to set various parameters when collecting images.
  • the flash switch 206C is used to turn on/off the flash.
  • the electronic device 100 displays a mode selection interface 30.
  • the mode selection interface 30 may include icons of the other modes loaded through step S103.
  • the mode selection interface 30 can include a time-lapse photography mode icon 204G, a professional camera mode icon, a skin beautification mode icon, a slow motion mode icon, a professional video mode icon, a skin beautification video mode icon, a gourmet mode icon, a 3D dynamic panorama mode icon, a panorama mode icon, an HDR mode icon, a smart object recognition mode icon, a streamer shutter mode icon, a sound photo mode icon, an online translation mode icon, a watermark mode icon, and a document correction mode icon.
  • the electronic device may open the camera application in response to a user operation, and then display the camera application interface 20 on the display screen.
  • the user can operate any of the above-mentioned mode icons, for example, with a touch operation, to activate the corresponding mode, and the electronic device activates the corresponding module in the HAL layer.
  • the user can touch the time-lapse photography mode icon 204G on the mode selection interface 30 to switch to the time-lapse photography mode.
  • the shooting control module may notify the capability enabling module to enable the modules related to the time-lapse photography mode in the HAL layer, such as the image acquisition module, the scene recognition module, and the image processing module.
  • the first user operation may include a user's touch operation on the time-lapse photography mode icon 204G.
  • the shooting control module and the preview display module may have been activated in step S102, that is, in response to the user starting the camera application, the shooting control module and the preview display module are activated.
  • the shooting control module can be used for shooting control in each mode.
  • the preview display module can be used for preview display in various modes.
  • the shooting control module sends a notification for starting the time-lapse photography mode to the capability enabling module of the HAL layer.
  • the capability enabling module enables the start of the image acquisition module, the scene recognition module, and the image processing module.
  • the image acquisition module collects pictures or videos according to preset shooting parameters and shooting methods.
  • the preset shooting parameters and shooting methods may be preset, for example, may correspond to a normal light scene.
  • the preset shooting mode may be a video recording mode.
  • for the video recording mode, refer to the specific description in step S112.
  • the image acquisition module sends the collected pictures or videos to the scene recognition module.
  • the image acquisition module may also send the image or video to the image processing module for processing to obtain the video data stream, and then the image processing module sends the video data stream to the preview display module for preview display.
  • the image processing module can use a post-processing algorithm corresponding to a preset photographing scene (for example, a normal light scene) to process to obtain a video data stream.
  • the video data stream includes a set of sequential pictures, and this set of pictures can be time stamped when they are taken.
  • the time stamp can be reset during the encoding process of this group of pictures by the encoding module.
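The timestamp reset mentioned above is the essence of time-lapse encoding: frames captured seconds apart are re-stamped so the encoded clip plays back at a normal frame rate. A minimal sketch, with the playback rate as an assumption:

```python
def remap_timestamps(capture_times_s, playback_fps=30.0):
    """Replace capture timestamps with evenly spaced playback timestamps.

    Frames captured at long, possibly uneven intervals are re-stamped at
    1/playback_fps spacing so the encoder produces a normal-rate clip.
    The 30 fps playback rate is an illustrative assumption.
    """
    return [i / playback_fps for i in range(len(capture_times_s))]
```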
  • the scene recognition module recognizes the shooting scene according to the picture or video.
  • the scene recognition module can obtain the exposure parameters of the collected picture according to the picture or video, and determine the brightness difference between the bright and dark areas of the picture. Specifically, the scene recognition module can use the exposure parameters to determine the shooting scene. For example, if the exposure parameter is EV, the camera application can issue a notification for detecting the exposure parameter to the HAL layer. When the scene recognition module in the HAL layer receives a notification for detecting exposure parameters, it can calculate the exposure value of the picture and the brightness difference between the bright and dark areas of the picture. When the exposure value is greater than the first threshold, and the brightness difference between the bright and dark areas of the picture is less than the second threshold, the scene recognition module may determine that the shooting scene is a dark light scene.
  • In other cases, for example when the brightness difference between the bright and dark areas is not less than the second threshold, the scene recognition module may determine that the shooting scene is a backlit scene.
  • Otherwise, the scene recognition module may determine that the shooting scene is a normal light scene.
  • the scene recognition module can also use the foregoing principles to recognize shooting scenes corresponding to multiple pictures, so as to more accurately determine the shooting scenes.
  • the embodiment of the present application does not limit the specific algorithm used by the scene recognition module to recognize the shooting scene.
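Since the embodiment does not fix a specific recognition algorithm, the threshold logic described above can be sketched as follows. The numeric thresholds and the exact backlit/normal branching are illustrative assumptions only:

```python
def recognize_scene(exposure_value, brightness_diff,
                    ev_threshold=12.0, diff_threshold=80):
    """Classify the shooting scene from exposure statistics.

    exposure_value: exposure value (EV) computed for the collected picture.
    brightness_diff: brightness difference between the bright and dark
    areas of the picture.
    The threshold values and the backlit/normal branches are assumed
    for illustration; the embodiment does not limit the algorithm.
    """
    if exposure_value > ev_threshold and brightness_diff < diff_threshold:
        return "dark"          # dim overall, evenly lit
    if brightness_diff >= diff_threshold:
        return "backlit"       # strong bright/dark contrast
    return "normal"
```

In practice the module would run this over several consecutive pictures (as the text notes) and keep the majority result to avoid flicker between scenes.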
  • the scene recognition module reports the recognized shooting scene to the shooting control module and sends it to the image processing module.
  • When the scene recognition module recognizes that the current shooting scene is different from the preset shooting scene, step S111 is executed.
  • the preset shooting parameters and shooting modes in the image acquisition module are shooting parameters and shooting modes corresponding to the preset shooting scenes.
  • the preset shooting scene may be a normal light scene
  • the preset shooting parameters and shooting mode in the image acquisition module are then the shooting parameters and shooting mode of the normal light scene.
  • the shooting control module adjusts shooting parameters and shooting modes according to the received shooting scene.
  • the adjusted shooting parameter may be the first shooting parameter, and the adjusted shooting mode is the first shooting mode.
  • the shooting parameters may include any one or more of the following: shutter, exposure time, aperture value, exposure value, ISO, and frame interval.
  • the shooting mode can include video mode and photo mode.
  • the shooting control module can set a shooting parameter and shooting mode corresponding to each shooting scene.
  • the first shooting parameter is a shooting parameter corresponding to a normal light scene
  • the first shooting mode is a shooting mode corresponding to the normal light scene.
  • the first shooting parameter may be a shooting parameter corresponding to a backlit scene
  • the first shooting mode may be a shooting mode corresponding to a backlit scene.
  • the first shooting parameter may also be a shooting parameter corresponding to a low light scene
  • the first shooting mode may also be a shooting mode corresponding to a low light scene.
  • the shooting control module determines that the adjusted shooting parameter is the shooting parameter corresponding to the backlit scene according to the above-mentioned corresponding relationship, and the adjusted shooting mode is the shooting mode corresponding to the backlit scene.
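The correspondence the shooting control module maintains between shooting scenes and their shooting parameters and modes can be sketched as a lookup table. The concrete parameter values below are illustrative assumptions, not values from the embodiment:

```python
# Hypothetical correspondence table: scene -> (shooting parameters, shooting mode).
# ISO and exposure-time values are placeholders for illustration only.
SCENE_TABLE = {
    "normal":  ({"iso": 100, "exposure_time_s": 1 / 24}, "video"),
    "backlit": ({"iso": 100, "exposure_time_s": 1 / 24, "hdr": True}, "video"),
    "dark":    ({"iso": 800, "exposure_time_s": 1.0}, "photo"),
}

def adjust(scene):
    """Return the first shooting parameter and first shooting mode
    for the recognized scene (falling back to the normal light scene)."""
    return SCENE_TABLE.get(scene, SCENE_TABLE["normal"])
```

For example, when the scene recognition module reports a dark light scene, `adjust("dark")` yields the long-exposure parameters and the photographing mode, matching the dark-scene behavior described later in the text.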
  • In a normal light scene, the image acquisition module can collect pictures at a standard frame rate (for example, 24 pictures per second) to form a video, and the brightness of each frame of picture collected by the image acquisition module is sufficient.
  • In a dark light scene, the light intensity of the shooting environment is insufficient; at the standard frame rate the exposure time of each picture is too short, so the brightness of the pictures is not enough. Therefore, in a dark light scene, the image acquisition module can take pictures with an exposure time longer than the exposure time corresponding to the standard frame rate, so as to obtain a series of brighter pictures and thus a higher-quality video.
  • Exposure parameters such as shutter, exposure time, exposure value, and ISO can be adjusted automatically by 3A algorithms, that is, auto focus, auto exposure, and auto white balance.
  • the shooting control module can calculate the corresponding exposure values in different shooting scenes.
  • the shooting control module can automatically set the shutter speed and aperture value according to the exposure value of the collected picture, so that the shooting control module can automatically set the shooting parameters according to the shooting scene.
  • the shooting control module may calculate a new exposure parameter according to the exposure value corresponding to the shooting scene.
  • the new exposure parameters can include a new shutter, exposure time, exposure value, and ISO.
  • the shooting control module applies the new exposure parameters to the camera, and then the shooting control module obtains the exposure value again. If the exposure value does not meet the requirements, the camera control module can readjust the exposure parameters until the obtained exposure value meets the requirements.
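The apply-then-remeasure loop described above can be sketched as follows. `measure_ev` is a stand-in for whatever metering the camera pipeline performs, and the proportional step rule is an assumption for illustration:

```python
def converge_exposure(measure_ev, target_ev, tolerance=0.5,
                      exposure_time=1 / 24, max_rounds=10):
    """Repeatedly apply a new exposure time and re-measure until the
    measured exposure value is close enough to the target.

    measure_ev(exposure_time) -> measured EV (hypothetical camera hook,
    assumed here to increase with exposure time).
    """
    for _ in range(max_rounds):
        ev = measure_ev(exposure_time)
        if abs(ev - target_ev) <= tolerance:
            return exposure_time              # exposure value meets requirements
        # Longer exposure raises brightness; take a simple proportional step.
        exposure_time *= 2 ** ((target_ev - ev) / 2)
    return exposure_time
```

The loop mirrors the text: apply new parameters, obtain the exposure value again, and readjust until the obtained exposure value meets the requirements.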
  • the frame interval can be affected by the shooting scene.
  • the frame sampling interval may be set on the user interface of the camera application in response to a user operation.
  • The scene recognition module can determine the single-frame processing time, that is, the time required for the image acquisition module and the image processing module to complete picture acquisition and processing. Then, in the corresponding shooting scene, the minimum frame sampling interval that can be set on the application interface of the camera application is greater than or equal to the single-frame processing time in that scene.
  • In the photographing mode, the frame sampling interval is the interval between collecting successive pictures, that is, the second time interval.
  • the processing time for a single frame is 1 second.
  • After the scene recognition module reports the recognized dark light scene to the shooting control module in the camera application, the minimum frame sampling interval that can be set through the control on the application interface of the camera application is greater than or equal to 1 second.
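The constraint that the user-settable frame sampling interval must be at least the single-frame processing time can be sketched as a simple clamp. The per-scene times below use the examples from the text (1 second in a dark light scene, 0.5 seconds in a normal light scene); the backlit value is an assumption:

```python
# Single-frame processing times (seconds); the backlit value is assumed.
SINGLE_FRAME_TIME = {"dark": 1.0, "normal": 0.5, "backlit": 0.5}

def clamp_interval(requested, scene):
    """Return a valid frame sampling interval: never shorter than the
    single-frame processing time of the current shooting scene, so a
    frame can always be captured within each interval."""
    return max(requested, SINGLE_FRAME_TIME[scene])
```

This is what the UI control effectively enforces: a request of 0.2 seconds in a dark light scene is raised to the 1-second minimum, avoiding frame sampling failures.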
  • the scene recognition module can also determine the exposure value based on the picture or video, and determine the new exposure time based on the exposure value.
  • the scene recognition module can determine the new exposure time according to the following rules: the smaller the exposure value, the shorter the new exposure time.
  • the scene recognition module can report the new exposure time to the shooting control module of the camera application, and the shooting control module then issues the new exposure time to the image acquisition module.
  • the image acquisition module can set the exposure time of each picture when the pictures are collected to the new exposure time.
  • the scene recognition module may also determine the single frame processing time according to the new exposure time.
  • the scene recognition module reports the new exposure time, and the shooting control module determines the single frame processing time according to the new exposure time.
  • the camera application instructs the HAL layer to collect multiple pictures according to the single frame time interval, which is described in detail with reference to FIG. 9.
  • the new exposure time may also be determined by the shooting control module and then sent to the scene recognition module.
  • the shooting method adopted is the video mode.
  • the exposure time of each picture can be determined according to the recorded frame rate.
  • the scene recognition module determines the processing time of a single frame in the normal light scene and the back light scene according to the exposure time.
  • The shooting parameters corresponding to the normal light scene may include the 3A parameters (auto focus, auto exposure, and auto white balance), and may also include the adjustable range of the frame sampling interval displayed on the user interface in the normal light scene.
  • The shooting parameters corresponding to the backlit scene may include the 3A parameters, and may also include the adjustable range of the frame sampling interval displayed on the user interface in the backlit scene.
  • The shooting parameters corresponding to the dark light scene may include the 3A parameters, and may also include the adjustable range of the frame sampling interval displayed on the user interface in the dark light scene.
  • the shooting control module sends the first shooting parameter and the first shooting mode to the image acquisition module.
  • In a backlit scene, the shooting control module sets the shooting parameters to the shooting parameters corresponding to the backlit scene, sets the shooting mode to the shooting mode corresponding to the backlit scene, and sends the shooting parameters and shooting mode corresponding to the backlit scene to the image acquisition module.
  • In a dark light scene, the shooting control module sets the shooting parameters to the shooting parameters corresponding to the dark light scene, sets the shooting mode to the shooting mode corresponding to the dark light scene, and issues the shooting parameters and shooting mode corresponding to the dark light scene to the image acquisition module.
  • the image acquisition module collects a picture or video according to the first shooting parameter and the first shooting mode.
  • the image acquisition module sends the collected pictures or videos to the image processing module.
  • the picture or video may be collected and sent in real time. That is, after the image acquisition module collects a picture, the picture can be sent to the image processing module.
  • the image processing module processes the picture or video according to the identified shooting scene to obtain a video data stream.
  • the image processing module can set different post-processing algorithms for different shooting scenes.
  • The following introduces the post-processing algorithms that the image processing module sets for the normal light scene, the backlit scene, and the dark light scene.
  • the image processing module can use video post-processing algorithms to perform anti-shake, noise reduction and other processing on the collected pictures or videos.
  • In a backlit scene, the image processing module can use HDR algorithms for processing, and can also perform processing such as anti-shake and noise reduction.
  • FIG. 5 is a schematic diagram of a human-computer interaction interface provided by an embodiment of the present application.
  • the user can touch the time-lapse photography mode icon 204G to switch to the time-lapse photography mode.
  • the electronic device 100 displays the camera application interface 20.
  • the camera application interface 20 includes a time-lapse photography mode prompt 207 and a close control 208. The close control 208 is used to close the time-lapse photography mode.
  • In response to a touch operation on the close control 208, the modules related to the time-lapse photography mode in the HAL layer (the image acquisition module, the image processing module, and the scene recognition module) can be closed, the time-lapse photography mode prompt 207 is no longer displayed, and the electronic device 100 displays the interface described in (B) in FIG. 4.
  • the recognized shooting scene may be reported to the camera application, and the application interface of the camera application may include the recognized shooting scene.
  • the camera application interface 20 may also include a shooting scene prompt 209.
  • the shooting scene prompt 209 may prompt: a dark light scene.
  • the quality of the picture stream displayed in the viewfinder frame 205 is higher than the quality of the picture stream before adjustment.
  • The video post-processing algorithm can also be adjusted accordingly, so that the brightness range of the picture displayed in the viewfinder frame 205 is larger than the brightness range before the adjustment.
  • During step S112, if the adjustment of the shooting parameters and shooting mode has not been completed, for example, when the user touches the close control 208 while step S112 is being executed, the execution of step S112 and subsequent steps is stopped, and the electronic device 100 displays the camera application interface 20 shown in (B).
  • The camera application may display a control for setting the frame sampling interval, that is, the frame sampling interval control 211, on the camera application interface 20 according to the recognized shooting scene.
  • FIG. 6 is a schematic diagram of a human-computer interaction interface provided by an embodiment of the present application.
  • the camera application interface 20 may further include a frame sampling interval control 211 and a prompt 212, where:
  • The prompt 212 can be used to remind: tap the icon to adjust the frame sampling interval; press and hold to view details.
  • the frame interval control 211 is used to adjust the frame interval of the captured video. For details, please refer to the example described in FIG. 7.
  • The electronic device 100 may display the frame sampling interval details interface 40, which includes a function prompt 401. The function prompt 401 can be used to remind: the larger the frame sampling interval, the shorter the playback time into which the captured video is compressed. Different frame sampling intervals apply to different scenes; tap the control to view scene details.
  • the frame sampling interval details interface 40 may also include a go to view option 402 for adjusting the frame sampling interval.
  • FIG. 7 is a schematic diagram of a human-computer interaction interface provided by an embodiment of the present application.
  • the electronic device 100 displays the camera application interface 20, and the camera application interface 20 includes a frame interval adjustment control 213.
  • the camera application interface 20 displayed by the electronic device includes a shooting scene prompt 209 to prompt a dark light scene.
  • the camera application interface may be a time-lapse photography interface.
  • the interface of the time-lapse photography may include a first control, that is, a frame interval adjustment control 213.
  • the shooting scene identified by the scene recognition module in step S110 is a dark light scene
  • the processing time of a single frame is 1 second.
  • The minimum frame sampling interval that can be set through the frame sampling interval adjustment control 213 is greater than or equal to 1 second, the single-frame processing time in a dark light scene. In this way, it can be ensured that a frame of picture can be extracted within each frame sampling interval, reducing frame sampling failures caused by the frame sampling interval being shorter than the single-frame processing time.
  • the first control may include a frame sampling interval adjustment control 213 for adjusting the second time interval within a value range greater than or equal to the exposure time.
  • the user can touch or drag the frame interval adjustment control 213 to adjust the frame interval.
  • Different frame interval can correspond to different shooting scenes.
  • the frame sampling interval adjustment control 213 may include an urban crowd logo 213A, a sunrise and sunset logo 213B, a sky cloud logo 213C, and a building construction logo 213D, where:
  • the urban crowd indicator 213A is used to indicate that when the frame sampling interval is set to 1 second, the applicable scene is an urban crowd.
  • the sunrise and sunset flag 213B is used to indicate that when the frame sampling interval is set to 10 seconds, the applicable scene is sunrise and sunset.
  • the sky cloud mark 213C is used to indicate that when the frame sampling interval is set to 15 seconds, the applicable scene is sky cloud.
  • the building construction logo 213D is used to indicate that when the frame sampling interval is set to 30 seconds, the applicable scene is building construction.
  • FIG. 8 is a schematic diagram of a human-computer interaction interface provided by an embodiment of the present application.
  • The minimum frame sampling interval that can be set through the frame sampling interval adjustment control 213 is greater than or equal to 0.5 seconds, the single-frame processing time in a normal light scene. In this way, it can be ensured that a frame of picture can be extracted within each frame sampling interval, reducing frame sampling failures caused by the frame sampling interval being shorter than the single-frame processing time.
  • the shooting scene prompt 209 prompts a normal light scene.
  • the image processing module sends the video data stream to the preview display module.
  • the preview display module performs preview display according to the video data stream.
  • the picture or video displayed in the preview can be updated in real time.
  • Steps S119 to S124 form the video file generation and preview process.
  • In response to the user's touch operation on the shooting control 202, the shooting control module sends a notification to the encoding module and the HAL layer.
  • the notification is used to make the encoding module receive the video data stream from the image processing module and encode it to form a video file.
  • In response to the user's touch operation on the shooting control 202, the shooting control module may also notify the HAL layer to start recording video, and the image processing module in the HAL layer may send the real-time video data stream to the encoding module.
  • After receiving the notification, the encoding module receives the video data stream from the image processing module.
  • the encoding module encodes the video data stream to form a video file.
  • the image processing module sends the video data stream to the preview display module.
  • the preview display module performs preview display according to the video data stream.
  • the picture displayed in the preview is updated in real time.
  • the state of the shooting control 202 is a not-started shooting state.
  • In response to the user's touch operation on the shooting control 202, the state of the shooting control 202 changes from the not-started shooting state to the shooting state.
  • a control 210 for prompting the video recording time may be displayed on the camera application interface 20 to update the duration of the video recording in real time.
  • When the state of the shooting control 202 is the shooting state, the user can perform a touch operation on the shooting control 202 again. In response to the touch operation, the state of the shooting control 202 changes from the shooting state to the not-started shooting state.
  • the electronic device 100 finishes recording a video in the time-lapse photography mode once, and the encoding module stops receiving the video data stream, and encodes the received video data stream to form a video file.
  • The shooting control module may also notify the HAL layer to end the video recording, and the image processing module in the HAL layer may stop sending the video data stream to the encoding module.
  • the encoding module may sequentially set time stamps for multiple pictures, so that the video file composed of the multiple pictures can be displayed at a set playback frame rate, for example, 20 pictures per second.
  • Setting the playback frame rate is equivalent to setting the frame interval time used when the video file is played.
  • The electronic device extracts frames from the multiple pictures to obtain framed pictures, encodes the framed pictures with a set first frame interval time to obtain a video file, and the frame interval time when the obtained video file is played is the first frame interval time.
  • the first frame interval time is less than or equal to the first time interval, that is, less than the frame interval time during which multiple pictures are collected.
  • the electronic device obtains the video file by encoding the second frame interval time set, and the frame interval time when the obtained video file is played is the second frame interval time.
  • the second frame interval time is less than the second time interval, that is, less than the frame interval time during which multiple pictures are collected.
  • The encoding module can assign timestamps to the pictures received in sequence according to the timestamp unit. Specifically, the encoding module receives the first picture and sets its timestamp to 0; it receives the second picture and sets its timestamp to 400 timestamp units; and so on, to obtain the video file. Each picture in the video file has a timestamp.
  • The pictures in the video file are displayed in order of increasing timestamp. That is, the electronic device first displays the picture with a timestamp of 0, then displays the picture with a timestamp of 400 after 1/20 second, and so on, to realize the playback of the video file.
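The timestamp assignment above can be sketched as follows. A timescale of 8000 units per second is an assumed value, chosen so that one frame at the 20 pictures-per-second playback rate spans 400 timestamp units, as in the example:

```python
def assign_timestamps(num_pictures, playback_fps=20, timescale=8000):
    """Give sequentially received pictures evenly spaced timestamps so the
    encoded file plays back at playback_fps.

    timescale (timestamp units per second) is an assumed value; 8000 / 20
    yields the 400-unit step used in the example.
    """
    step = timescale // playback_fps          # timestamp units per frame
    return [i * step for i in range(num_pictures)]
```

Whatever interval the pictures were originally collected at (for example one per second in a dark light scene), they receive these compressed timestamps, which is what produces the time-lapse speed-up on playback.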
  • In a dark light scene, the shooting control module determines that the shooting mode is the photographing mode. In the normal light scene and the backlit scene, the shooting control module determines that the shooting mode is the video recording mode.
  • FIG. 9 is a schematic flowchart of a video file collection and preview process provided by an embodiment of the present application.
  • The collection and preview process is the process corresponding to the photographing mode; it is executed after step S118 in the embodiment described in FIG. 3, and specifically may be executed after step S120.
  • the electronic device adopts the photographing method to collect and preview the video file.
  • Step S121 can be specifically expanded into S121a (including S201 to S207) and S121b (including S210) in the example shown in FIG. 9; step S122 may include S208, step S123 may include S207, and step S124 may include S209.
  • The shooting control module sends a shooting request to the image acquisition module.
  • the shooting request is used to request to collect a picture, and the exposure time of the collected picture is generated and stored in the HAL layer.
  • the shooting control module uses a timer to count the frame sampling interval.
  • the frame sampling interval may be determined by the shooting control module or the scene recognition module according to the exposure time, and the exposure time may be determined by the scene recognition module according to the exposure value.
  • the camera application can set the frame interval of each picture, and complete the picture acquisition and processing in the HAL layer within the frame interval.
  • the frame sampling interval may be set by the user on the camera application interface.
  • the frame sampling interval may be set by the user on the camera application interface.
  • the image acquisition module performs exposure according to the exposure time determined by the shooting control module to acquire a picture.
  • the exposure time may be determined by the scene recognition module according to the exposure value, and the image acquisition module may obtain it from the scene recognition module.
  • the image acquisition module sends the collected pictures to the image processing module.
  • the image acquisition module sends a notification to the shooting control module that one frame of picture has been taken.
  • After the shooting control module receives the notification that one frame of picture has been collected, it issues a shooting request for the next frame of picture once the timing ends. For details, refer to the description of step S210.
  • step S205 may also be executed before step S204.
  • the image processing module uses the video post-processing algorithm of the dark light scene for processing.
  • the image processing module sends the processed picture to the encoding module, and sends it to the preview display module.
  • the encoding module can receive multiple pictures and then encode them.
  • the encoding module encodes the picture.
  • step S208 is not limited to be performed before step S209, and may also be performed after step S209, which is not limited in the embodiment of the present application.
  • the preview display module displays the preview according to the picture.
  • the preview display module can display the picture during the frame sampling interval until the preview display module receives the next picture.
  • If the shooting control module detects that the notification is received within the timing time, it sends a shooting request for the next frame of picture to the image acquisition module after the timing ends.
  • the shooting control module needs to wait until the notification that the one frame of the picture has been collected is received before performing step S210.
  • Alternatively, if the shooting control module has waited longer than a set time (the set time is greater than the timing time) and still has not received the notification that one frame of picture has been collected, it sends a shooting request for the next frame of picture.
  • the HAL layer executes the process of collecting and previewing the next frame of pictures according to the received shooting request of the next frame of pictures. For details, please refer to steps S201 to S210.
  • Steps S201 to S210 are executed in a loop until the user touches the shooting control to end the recording, so that multiple pictures are obtained. These multiple pictures can be sequenced and encoded to obtain a video file.
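The loop of steps S201 to S210 — issue a capture request, wait out the frame sampling interval, and only then request the next frame — can be sketched as follows. `capture_one` is a simplified stand-in for the whole HAL-layer round trip (capture, process, notify):

```python
import time

def timelapse_capture(capture_one, interval_s, num_frames):
    """Collect num_frames pictures, spacing capture requests by interval_s.

    capture_one() stands in for steps S201-S207: the HAL layer captures,
    processes, and returns one picture (its return doubles as the
    'one frame collected' notification).  The next request is issued only
    after both the timer has elapsed and the notification has arrived,
    matching steps S210 and the waiting rule in the text.
    """
    pictures = []
    for _ in range(num_frames):
        start = time.monotonic()
        pictures.append(capture_one())          # S201-S207
        elapsed = time.monotonic() - start
        if elapsed < interval_s:                # timer not yet expired:
            time.sleep(interval_s - elapsed)    # wait before the next request
    return pictures
```

Because the synchronous call to `capture_one` cannot return before the frame is collected, this sketch naturally covers the case where the timing ends before the notification arrives: the loop simply waits for the return.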
  • FIG. 10 is a schematic flowchart of a video file collection and preview process provided by an embodiment of the present application.
  • The collection and preview process is the process corresponding to the video recording mode; it is executed after step S118 in the embodiment described in FIG. 3, and may be executed after step S120.
  • the electronic device uses the video recording method to collect and preview the video file.
  • step S121 can be specifically expanded into S121c (including S301 to S302) and S121d (including S305 to S308) in the example shown in FIG. 10.
  • Step S122 may include S309, step S123 may include S303, and step S124 may include S304.
  • the shooting control module sends a shooting request to the image acquisition module.
  • the shooting request is used to request to collect the video
  • the frame rate of the collected video can be preset, for example, to the standard frame rate, that is, 24 pictures collected per second.
  • the image acquisition module records video according to a preset frame rate to obtain a video.
  • Each time the image acquisition module collects a frame of picture of the video, it reports to the shooting control module a notification that a frame of picture has been collected.
  • the image acquisition module sends the video to the preview display module.
  • the preview display module displays according to the video preview.
  • the image acquisition module may send the picture to the preview display module for preview display after the acquisition of a picture is completed, so as to realize real-time preview display.
  • the image acquisition module extracts frames from the video.
  • the image acquisition module may also send the video to the image processing module, and the image processing module extracts frames of the video according to a preset frame extraction interval.
  • the frame sampling interval may be set by the user on the camera application interface.
  • the image acquisition module sends the framed video to the image processing module.
  • the image processing module uses a video post-processing algorithm to process the video after frame extraction.
  • If the scene recognition module recognizes that the shooting scene is a normal light scene, the video post-processing algorithm corresponding to the normal light scene is used for processing. If the scene recognition module recognizes that the shooting scene is a backlit scene, the video post-processing algorithm corresponding to the backlit scene is used for processing, for example, the HDR algorithm.
  • the image processing module sends the processed video to the encoding module.
  • the encoding module encodes the video.
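The video-mode path — record at a fixed frame rate, then extract frames at the sampling interval before post-processing and encoding — can be sketched as a minimal frame-extraction step. The function name and the rounding rule are illustrative, not from the embodiment:

```python
def extract_frames(frames, record_fps, sample_interval_s):
    """Keep one frame per sample_interval_s from a video recorded at
    record_fps (for example, the standard 24 frames per second).

    With record_fps=24 and a 1-second sampling interval, every 24th
    frame of the recorded video is kept.
    """
    step = max(1, round(record_fps * sample_interval_s))
    return frames[::step]
```

The extracted frames are what the image processing module receives in step S306 before post-processing and, finally, encoding into the time-lapse video file.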
  • the electronic device can recognize that the shooting scene is changed from the first shooting scene to the second shooting scene according to the collected pictures.
  • the electronic device can determine the second shooting parameters according to the second shooting scene, and can also determine the second shooting mode, and then collect multiple pictures according to the second shooting parameters and/or the second shooting mode, and encode the multiple pictures to obtain a video file.
  • the scene recognition module may also periodically perform scene recognition. For example, after steps S101 to S124, the image acquisition module collects pictures or videos according to shooting parameters and shooting methods to form a video data stream.
  • the scene recognition module may periodically perform scene recognition, for example, perform scene recognition every 10 minutes. If the scene recognition module detects that the shooting scene has changed, it reports the changed shooting scene to the shooting control module.
  • the electronic device 100 re-executes a shooting parameter and shooting mode adjustment process similar to steps S104 to S118.
  • the first shooting scene is a low-light scene
  • the corresponding first shooting parameter is a shooting parameter of the low-light scene
  • the first shooting mode is a low-light shooting mode (photographing mode).
  • the second shooting scene is a normal light scene
  • the corresponding second shooting parameter is a shooting parameter of a normal light scene
  • the second shooting mode is a normal light shooting mode (video mode).
  • the image acquisition module collects multiple pictures according to the shooting parameters and shooting modes corresponding to the dark light scene to form a video data stream.
  • the scene recognition module reports the normal light scene to the shooting control module.
  • the electronic device 100 re-executes the shooting parameter and shooting mode adjustment process corresponding to steps S104 to S118, adjusting the shooting parameters to the shooting parameters corresponding to the normal light scene and the shooting mode to the shooting mode corresponding to the normal light scene.
  • the image processing module adjusts the video post-processing algorithm according to the normal light scene to obtain the video data stream.
  • the shooting parameters and shooting method can be readjusted to improve the quality of the captured pictures, thereby improving the quality of the captured video.
  • The embodiment of this application takes video shooting in the time-lapse photography mode as an example, but the embodiments of this application are not limited to the time-lapse photography mode; the above video capture method can also be used in other video recording modes, which is not limited in the embodiments of this application.
  • The approach of recognizing the shooting scene to adjust shooting parameters provided in the embodiments of the present application can also be applied in various shooting modes, which is not limited in the embodiments of the present application.
  • all or part of the functions can be implemented by software, hardware, or a combination of software and hardware.
  • when implemented by software, the functions can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center that integrates one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • the process can be completed by a computer program instructing relevant hardware.
  • the program can be stored in a computer-readable storage medium; when the program is executed, it may include the processes of the foregoing method embodiments.
  • the aforementioned storage media include media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

A video capture method and an electronic device. In this method, in time-lapse photography mode, the electronic device can identify the current shooting scene, which includes, for example, a backlit scene, a low-light scene, or a normal light scene. The electronic device can adjust the camera's shooting parameters and shooting method according to the identified scene, and capture pictures using the adjusted parameters and method. The electronic device can also determine the video post-processing algorithm to use according to the identified scene, and process the captured pictures or video with that algorithm. The processed pictures can then be encoded into a video file. Implementing the technical solution provided by the embodiments of this application can improve the quality of video captured in time-lapse photography mode.

Description

视频采集方法和电子设备
本申请要求于2019年09月18日提交中国专利局、申请号为201910883504.5、申请名称为“视频采集方法和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本方案涉及电子技术领域,尤其涉及一种视频采集方法和电子设备。
背景技术
目前,在手机、平板等电子设备上,相机应用是重要的应用之一。用户可通过电子设备上的相机应用记录和分享图片、视频。当前用户对相机应用和摄影效果要求也越来越高。
随着相机相关技术的发展,延时摄影成为电子设备上相机应用的重要模式之一。在延时摄影模式下,电子设备可通过摄像头采集一组图片,或者通过摄像头采集一段视频进行视频抽帧得到一组图片。之后,电子设备将较长的录制时间内采集的这一组图片进行播放帧率调节,以得到播放时间相比于录制时间缩短的视频文件。该视频文件在被播放时,物体在较长的录制时间内缓慢变化的过程被压缩到一个较短的播放时间内进行播放,可呈现出平时用肉眼无法察觉的奇异精彩的景象。
现有的延时摄影模式中,用户可在相机应用的用户界面上手动调节一些拍摄参数,以在不同亮度的拍摄场景下拍摄出高质量的视频。例如,在光线强度很低的场景下,用户需要手动将曝光时间调长,并调节感光度量化规定(international organization for standardization,ISO)参数等拍摄参数,来提高暗光场景下采集的视频的质量。
然而,用户手动调节拍摄参数过程操作繁琐,另外对于非专业摄影的用户来说,手动调节拍摄参数的难度较大,从而降低了操作的便利性。
发明内容
本申请实施例提供了一种视频采集方法和电子设备,在延时摄影模式下,电子设备可根据不同的拍摄场景调整拍摄参数、拍摄方式和视频后处理算法,可以提高延时摄影模式下采集的视频的质量。
第一方面,本申请实施例提供了一种视频采集方法,该方法包括:电子设备显示相机应用界面,其中,该相机应用界面上包括延时摄影模式图标。响应于作用在该延时摄影模式图标的第一用户操作,该电子设备采集至少一个图片并根据该至少一个图片识别得到第一拍摄场景,该第一拍摄场景包括逆光场景、普通光场景或者暗光场景。该电子设备根据该第一拍摄场景确定第一拍摄参数,该第一拍摄参数与曝光量相关。该电子设备根据该第一拍摄参数采集多张图片,并将该多张图片进行编码得到视频文件,该视频文件被播放时的帧间隔时间小于或等于该多张图片被采集的帧间隔时间。
实施第一方面提供的方法,电子设备可根据识别出的拍摄场景调整摄像头的拍摄参数,并利用调整后的拍摄参数采集图片,以形成延时摄影视频文件。针对于不同的拍摄场景使 用对应的拍摄参数,可以提高延时摄影模式下采集的视频的质量。
结合第一方面,在一些实施例中,该电子设备根据该至少一个图片识别得到第一拍摄场景之后,该方法还包括:该电子设备根据该第一拍摄场景确定第一拍摄方式,该第一拍摄方式包括录像方式或者拍照方式;电子设备根据该第一拍摄参数采集多张图片,包括:该电子设备根据该第一拍摄参数和该第一拍摄方式采集多张图片。
本申请实施例中,电子设备还可根据识别出的拍摄场景调整拍摄方式,并利用调整后的拍摄方式采集图片,以形成延时摄影视频文件。针对于不同的拍摄场景使用对应的拍摄参数和对应的拍摄方式,可以进一步提高延时摄影模式下采集的视频的质量。
下面分别介绍拍摄方式为录像方式和拍照方式下形成延时摄影视频文件的过程。
(1)第一拍摄方式为录像方式
多张图片被采集的帧间隔时间为第一时间间隔;该电子设备将该多张图片进行编码得到视频文件,包括:该电子设备从该多张图片中抽取图片得到抽帧图片,该抽帧图片通过设定的第一帧间隔时间编码得到视频文件,即该视频文件被播放时的帧间隔时间为第一帧间隔时间。该视频文件的第一帧间隔时间小于或等于第一时间间隔。
录像方式下,得到的延时摄影视频文件被播放时的帧间隔时间(即第一帧间隔时间)小于或等于所采集视频中图片被采集的帧间隔时间,也小于抽帧图片被采集的帧间隔时间。例如,延时摄影视频文件被播放时的帧间隔时间为1/24秒,所采集视频中图片被采集的帧间隔时间为1/24秒,抽帧图片被采集的帧间隔时间为半个小时。
(2)第一拍摄方式为拍照方式
该多张图片被采集的帧间隔时间为第二时间间隔。该第二时间间隔大于该第一时间间隔,且该第二时间间隔由该曝光时间确定,该第一拍摄参数包括曝光时间。该电子设备将该多张图片进行编码得到视频文件,包括:该电子设备通过设定的第二帧间隔时间编码得到视频文件,即该视频文件被播放时的帧间隔时间为第二帧间隔时间。该第二帧间隔时间小于该第二时间间隔。
本申请实施例中,该第一拍摄场景包括该逆光场景或该普通光场景时,该第一拍摄方式为该录像方式;该第一拍摄场景包括该暗光场景时,该第一拍摄方式为该拍照方式。
拍照方式中,摄像头每隔一定的时间间隔采集一张图片,该时间间隔可为图片提供足够的曝光时间,以提高图片的亮度。因此,在暗光场景下通过拍照方式进行延时摄影模式的视频采集,由于每帧图片亮度较高,所得到的视频质量也较高。
在拍照方式下,电子设备检测到完成一张图片的采集,才会执行下一张图片的采集。具体的,电子设备根据该第一拍摄参数和该第一拍摄方式采集多张图片,包括:对于该多张图片中的每张图片,该电子设备检测在该抽帧间隔内是否完成采集该图片;若是,该电子设备采集下一张图片;若否,该电子设备在完成采集该图片后才采集下一张图片。这样,可保证在抽帧间隔时间内可抽取到两帧图片,减少由于抽帧间隔小于单帧处理时间产生抽帧失败的情况。
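As a rough illustration of the wait-for-completion behaviour described above, the photo-mode capture loop might look like the following sketch. The function name and the blocking `capture_frame` callable are assumptions for illustration only, not part of the original text:

```python
import time

def capture_timelapse_photos(capture_frame, frame_interval_s, num_frames):
    """Photo-mode capture: start the next frame only after the current
    one is fully captured, so a frame that overruns the extraction
    interval never causes a failed extraction."""
    frames = []
    for _ in range(num_frames):
        start = time.monotonic()
        frames.append(capture_frame())  # blocks until this frame is done
        elapsed = time.monotonic() - start
        if elapsed < frame_interval_s:
            # the frame finished early: pad out to the extraction interval
            time.sleep(frame_interval_s - elapsed)
        # otherwise the frame overran the interval: start the next at once
    return frames
```

The key point is that the loop never issues a new capture while the previous one is still in flight, matching the "complete one picture before collecting the next" rule above.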
其中,电子设备可根据一张图片识别拍摄场景,也可以根据多张图片识别拍摄场景。
在本申请的一些实施例中,电子设备可为多张图片按照顺序设置时间戳,使得这多张图片所组成的视频文件可按照所设定的被播放时的帧间隔进行播放。例如,设定时间戳的 单位为1/8000秒,1秒钟按照时间戳单位对应8000。电子设备可按照前例中设定的播放帧率为每秒20张图片。那么电子设备可以得到相邻两张图片之间的时间差(即播放时的帧间隔时间),即是时间戳增量8000/20=400时间戳单位,即两张图片之间间隔1/20秒。电子设备可对顺序接收到的图片按照时间戳单位进行时间戳的赋值。具体的,电子设备接收到第一张图片,设置其时间戳为0。电子设备接收到第二张图片,设置其时间戳为400时间戳单位,以此类推得到视频文件。
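The timestamp arithmetic above can be sketched in a few lines (the function name is an assumption for illustration):

```python
def assign_timestamps(num_frames, playback_fps, timebase=8000):
    """Assign presentation timestamps so the encoded file plays back at
    `playback_fps`. With a timebase of 8000 units per second and a
    playback rate of 20 frames per second, the per-frame increment is
    8000 // 20 = 400 timestamp units, i.e. 1/20 s between frames."""
    step = timebase // playback_fps
    return [i * step for i in range(num_frames)]
```

For the example above, the first three pictures receive timestamps 0, 400 and 800 timestamp units.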
本申请实施例中,与曝光量相关的第一拍摄参数可包含快门、曝光时间、光圈值、曝光值、ISO和抽帧间隔。本申请实施例中,曝光量可表征摄像头中感光器在曝光时间内接收到光的多少。拍摄参数中,快门、曝光时间、光圈值、曝光值和ISO,电子设备可通过算法实现自动对焦、自动曝光、自动白平衡和3A(AF、AE和AWB),以实现这些拍摄参数的自动调节。
结合第一方面,在一些实施例中,该电子设备根据该第一拍摄场景,确定第一拍摄参数之后,该方法还包括:该电子设备在该延时摄影的界面上显示第一控件,该第一控件用于在大于或等于该曝光时间的取值范围内调节该第二时间间隔,该第一拍摄参数包括该第二时间间隔。
具体的,当检测到作用在延时摄影模式图标的用户操作时,电子设备显示的相机应用界面上可包含拍摄场景提示,例如提示暗光场景,该相机应用界面可以是延时摄影的界面。该延时摄影的界面可包含第一控件,即抽帧间隔调节控件。
结合第一方面,在一些实施例中,该电子设备采集图片并根据采集的图片识别得到第一拍摄场景之后,该方法还包括:该电子设备根据该第一拍摄场景,确定第一视频后处理算法,该第一视频后处理算法与该第一拍摄场景对应;该电子设备将该多张图片进行编码得到视频文件之前,该方法还包括:该电子设备使用该第一视频后处理算法对该多张图片进行处理得到处理后的多张图片;该电子设备将该多张图片进行编码得到视频文件,包括:该电子设备将该处理后的多张图片进行编码得到视频文件。
本申请实施例中,对于普通光场景、逆光场景或者暗光场景,所采用对应的视频后处理算法不同。
①普通光场景,图像处理模块可采用视频后处理算法对采集的图片或视频进行防抖、降噪等处理。
②暗光场景,图像处理模块可进行防抖、降噪等处理,还可以通过暗光优化算法进行暗光优化处理,以提高暗光场景下所采集图片的质量。
③逆光场景,图像处理模块可进行防抖、降噪等处理,还可使用HDR算法进行处理。使用HDR算法可将采集的多张图片合成为一张图片。这多张图片具有不同的曝光时间。不同的曝光时间的图片,图片的亮度不同,所提供图片的细节也不同,从而提高逆光场景下图片的质量。
结合第一方面,在一些实施例中,该相机应用界面还包含拍摄控件,该电子设备根据该第一拍摄参数采集多张图片,并将该多张图片进行编码得到视频文件,包括:响应于作用在该拍摄控件上的第二用户操作,该电子设备根据该第一拍摄参数采集多张图片,并将该多张图片进行编码得到视频文件。
在本申请的一些实施例中,视频文件中的图片还可以包含检测到第二用户操作之前电子设备采集的图片。
在本申请的一些实施例中,电子设备还可以是响应于第一用户操作,即执行根据该第一拍摄参数采集多张图片,并将该多张图片进行编码得到视频文件。即当检测到第一用户操作,电子设备可根据识别到的第一拍摄场景确定第一拍摄参数和第一拍摄方式,然后根据该第一拍摄参数和第一拍摄方式采集多张图片,并将该多张图片进行编码得到视频文件。
本申请实施例中,电子设备根据所述第一拍摄场景,确定第一拍摄参数和第一拍摄方式之后,该方法还包括:电子设备根据第一拍摄参数和第一拍摄方式将采集的多张图片进行预览显示。
本申请实施例中,相机应用可包含模式加载模块、拍摄控制模块和预览显示模块。HAL层可包含与相机的延时摄影模式相关的模块:能力使能模块、图像采集模块、场景识别模块和图像处理模块。
本申请实施例第一方面所提供的方法可具体实现为:首先,相机应用可响应于用户开启相机应用的操作,加载延时摄影模式。加载完成延时摄影模式之后,用户可通过触摸延时摄影模式图标启动延时摄影模式。然后,HAL层可识别拍摄场景并上报给应用程序层的拍摄控制模块。拍摄控制模块可对延时摄影模式下的拍摄参数和拍摄方式进行调整并下发回HAL层的图像采集模块。最后,图像采集模块可根据调整后的拍摄参数和拍摄方式采集图片或视频。图像处理模块还可根据所识别出的拍摄场景确定所采用的视频后处理算法,利用该视频后处理算法对采集到的图片或视频进行处理,处理后得到的视频数据可被编码模块进行编码得到视频文件。预览显示模块还可以获得处理后得到的视频数据,进行预览显示。
本申请实施例中,在模式加载模块完成加载模式后,电子设备可显示每个模式对应的图标。
本申请实施例提供一种场景识别模块根据图片或视频识别拍摄场景的过程。场景识别模块可以根据图片或视频获得所采集图片的曝光参数,并确定图片亮暗两部分区域亮度差值。具体的,场景识别模块可利用曝光参数来确定拍摄场景。例如,曝光参数为EV,相机应用可向HAL层下发用于检测曝光参数的通知。场景识别模块可计算图片的曝光值和图片亮暗两部分区域亮度差值。当曝光值大于第一阈值,且图片亮暗两部分区域亮度差值小于第二阈值时,场景识别模块可确定拍摄场景为暗光场景。当曝光值小于第一阈值,且图片亮暗两部分区域亮度差值大于第二阈值时,场景识别模块可确定拍摄场景为逆光场景。当曝光值小于第一阈值,且图片亮暗两部分区域亮度差值小于第二阈值时,场景识别模块可确定拍摄场景为普通光场景。可选的,场景识别模块还可利用上述原理识别多张图片对应的拍摄场景,以更加准确的确定拍摄场景。
结合第一方面,在一些实施例中,该电子设备采集图片并根据该图片识别得到第一拍摄场景之后,该方法还包括:该电子设备根据采集的图片识别到拍摄场景从该第一拍摄场景变为第二拍摄场景;该电子设备根据该第二拍摄场景,确定第二拍摄参数;该电子设备根据该第二拍摄参数采集多张图片,并将该多张图片进行编码得到该视频文件。
上述的视频采集方法中,在采集过程中如果检测到拍摄场景发生变化,可以重新调整 拍摄参数,以提高所采集图片的质量,从而提高所采集视频的质量。
可选的,电子设备还可以根据该第二拍摄场景确定第二拍摄方式,根据该第二拍摄参数和第二拍摄方式采集多张图片。
第二方面,本申请实施例提供了一种电子设备,该电子设备包括:一个或多个处理器、存储器和显示屏;该存储器与该一个或多个处理器耦合,该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令,该一个或多个处理器用于调用该计算机指令以使得该电子设备执行:显示相机应用界面,其中,该相机应用界面上包括延时摄影模式图标;响应于作用在该延时摄影模式图标的第一用户操作,采集至少一个图片并根据该至少一个图片识别得到第一拍摄场景,该第一拍摄场景包括逆光场景、普通光场景或者暗光场景;根据该第一拍摄场景确定第一拍摄参数,该第一拍摄参数与曝光量相关;根据该第一拍摄参数采集多张图片,并将该多张图片进行编码得到视频文件,该视频文件被播放时的帧间隔时间小于或等于该多张图片被采集的帧间隔时间。
第二方面提供的电子设备,可根据识别出的拍摄场景调整摄像头的拍摄参数,并利用调整后的拍摄参数采集图片,以形成延时摄影视频文件。针对于不同的拍摄场景使用对应的拍摄参数,可以提高延时摄影模式下采集的视频的质量。
结合第二方面,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:根据该第一拍摄场景确定第一拍摄方式,该第一拍摄方式包括录像方式或者拍照方式;该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:根据该第一拍摄参数和该第一拍摄方式采集多张图片。
结合第二方面,在一些实施例中,该第一拍摄方式为该录像方式时,该多张图片被采集的帧间隔时间为第一时间间隔;该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:从该多张图片中抽取图片得到抽帧图片,该抽帧图片通过设定的第一帧间隔时间编码得到视频文件,该视频文件的第一帧间隔时间小于或等于该第一时间间隔。
结合第二方面,在一些实施例中,该第一拍摄参数包括曝光时间,该第一拍摄方式为该拍照方式时,该多张图片被采集的帧间隔时间为第二时间间隔;该第二时间间隔大于该第一时间间隔,且该第二时间间隔由该曝光时间确定;该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:通过设定的第二帧间隔时间编码得到视频文件,该第二帧间隔时间小于该第二时间间隔。
结合第二方面,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:在该延时摄影的界面上显示第一控件,该第一控件用于在大于或等于该曝光时间的取值范围内调节该第二时间间隔,该第一拍摄参数包括该第二时间间隔。
结合第二方面,在一些实施例中,该第一拍摄场景包括该逆光场景或该普通光场景时,该第一拍摄方式为该录像方式;该第一拍摄场景包括该暗光场景时,该第一拍摄方式为该拍照方式。
结合第二方面,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:根据该第一拍摄场景,确定第一视频后处理算法,该第一视频后处理算法与该第一拍摄场景对应;使用该第一视频后处理算法对该多张图片进行处理得到处 理后的多张图片;该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:将该处理后的多张图片进行编码,得到视频文件。
结合第二方面,在一些实施例中,该相机应用界面还包含拍摄控件,该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:响应于作用在该拍摄控件上的第二用户操作,根据该第一拍摄参数采集多张图片,并将该多张图片进行编码得到视频文件。
结合第二方面,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:根据采集的图片识别到拍摄场景从该第一拍摄场景变为第二拍摄场景;根据该第二拍摄场景,确定第二拍摄参数;根据该第二拍摄参数采集多张图片,并将该多张图片进行编码得到该视频文件。
第三方面,本申请实施例提供了一种芯片,该芯片应用于电子设备,该芯片包括一个或多个处理器,该处理器用于调用计算机指令以使得该电子设备执行如第一方面以及第一方面中任一可能的实现方式描述的方法。
第四方面,本申请实施例提供一种包含指令的计算机程序产品,当上述计算机程序产品在电子设备上运行时,使得上述电子设备执行如第一方面以及第一方面中任一可能的实现方式描述的方法。
第五方面,本申请实施例提供一种计算机可读存储介质,包括指令,当上述指令在电子设备上运行时,使得上述电子设备执行如第一方面以及第一方面中任一可能的实现方式描述的方法。
可以理解地,上述第二方面提供的电子设备、第三方面提供的芯片、第四方面提供的计算机程序产品和第五方面提供的计算机存储介质均用于执行本申请实施例所提供的方法。因此,其所能达到的有益效果可参考对应方法中的有益效果,此处不再赘述。
附图说明
下面对本申请实施例用到的附图进行介绍。
图1是本申请实施例提供的一种电子设备100的结构示意图;
图2示出了本申请实施例示例性提供的电子设备100的软件结构框图;
图3是本申请实施例提供的一种视频采集方法的流程示意图;
图4~图8是本申请实施例提供的一些人机交互界面示意图;
图9是本申请实施例提供的一种视频文件采集及预览过程的流程示意图;
图10是本申请实施例提供的一种视频文件采集及预览过程的流程示意图。
具体实施方式
本申请以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请实施例的限制。如在本申请实施例的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括复数表达形式,除非其上下文中明确地有相反指示。还应当理解,本申请实施例中使用的术语“和/或”是指并包含一个或多个所列出项目的任何或所有可能组合。
本申请实施例提供了一种视频采集方法和电子设备。电子设备在延时摄影模式下,可识别出当前拍摄场景,当前拍摄场景例如包含逆光场景、暗光场景或者普通光场景。电子设备可根据识别出的拍摄场景调整摄像头的拍摄参数和拍摄方式,并利用调整后的拍摄参数和拍摄方式采集图片或视频。电子设备还可根据所识别出的拍摄场景确定所采用的视频后处理算法,利用该视频后处理算法对采集到的图片或视频进行处理。处理后得到的数据可被编码得到视频文件。
上述的视频采集方法中,电子设备可根据不同的拍摄场景调整拍摄参数、拍摄方式和视频后处理算法,可以提高延时摄影模式下采集的视频的质量。
下面介绍本申请实施例中相关的一些概念。
(1)延时摄影
延时摄影模式下,电子设备将较长的录制时间内采集的一组图片进行播放帧率调节,以得到播放时间相比于录制时间缩短的视频文件。下面分别介绍拍照方式和录像方式下,播放帧率调节的过程。
在拍照方式下,电子设备可以采用较低的帧率拍摄一定数量的图片,然后将这些图片的播放帧率调高,得到视频文件。图片的播放帧率是指这些图片组成的视频文件被播放时这些图片依次显示的显示频率。视频文件在被播放时,由于播放帧率高于采集图片时的采集帧率,物体在较长的录制时间内缓慢变化的过程被压缩到一个较短的播放时间内进行播放。该视频文件播放过程可呈现出平时用肉眼无法察觉的奇异精彩的景象。
在录像方式下,电子设备可以较低的帧率采集视频,采集到的视频包含一些图片。然后电子设备对视频进行抽帧以保留部分图片,并将这些图片的播放帧率调高,得到视频文件。得到的视频文件在被播放时,物体在较长录制时间内缓慢变化的过程被压缩到一个较短的播放时间内呈现。
(2)逆光场景、普通光场景和暗光场景
逆光场景下,从被摄物体的背面照射进入摄像头的光线相对较强,从被摄物体的正面照射进入摄像头的光线较弱,因此所采集图片中被摄物体的正面比较暗、背面比较亮。以采集人像为例,当被拍摄的人面对镜头时,光线从人的后方射过来。所采集图片中人像的脸部会呈现相对于背景较暗的情况。
普通光场景下,被摄物体正面的光线强度达到一定的阈值,且被摄物体背面的光线强度也达到一定阈值。
暗光场景,是指环境光线强度较低的场景,即被摄物体正面和背面的光线强度都较低。在暗光场景下,所采集图片的曝光时间需要增加来提高图片的亮度,从而提高图片的质量。例如,拍摄场景的光线强度小于光线强度阈值时,为暗光场景。电子设备可增加曝光时间来提高所采集图片的亮度。
本申请实施例中,当摄像头采用相同的拍摄参数进行图片采集时,不同拍摄场景下图像的质量不同。具体的,逆光场景下所采集图片的亮暗两个区域的亮度差值较大,例如大于第二阈值。普通光场景下所采集的图片的曝光值小于第一阈值,且图片亮暗两部分区域亮度差值小于第二阈值。暗光场景下图片的曝光值大于第一阈值。
本申请实施例中,不限于上述三种场景,还可以包含其他的拍摄场景,例如室内场景 等等。
(3)拍摄方式
本申请实施例中,摄像头的拍摄方式可包含录像方式和拍照方式。
其中,录像方式是指,电子设备按照标准的帧率(例如每秒钟采集24张图片)采集一段视频,然后电子设备将这段视频抽帧,仅保留部分帧的图片。这些图片被调整播放帧率(即确定延时摄影视频文件被播放时的帧间隔时间)后可得到视频文件。
例如,花蕾的开放约需3天3夜,即72小时。电子设备按照标准的帧率(例如每秒24张图片)采集视频,录制时间为72小时。采集的视频包含6220800(即72×60×60×24)张图片。电子设备设置的抽帧间隔为半小时,即电子设备在采集的视频中每半小时的录制时间间隔抽取一张图片,共从录制时间为72小时的视频中抽取144张图片,这些图片称为抽帧图片。电子设备再将这144张图片顺序排列,将采集的这些图片组成的视频文件的播放帧率设置为标准的播放帧率,例如每秒钟24张图片。则电子设备在播放视频时可在6秒钟的播放时间内,播放录制时间为3天3夜的开花过程。
本申请实施例中,录像方式下,得到的延时摄影视频文件被播放时的帧间隔时间小于或等于所采集视频中图片被采集的帧间隔时间,也小于抽帧图片被采集的帧间隔时间。例如前例中,延时摄影视频文件被播放时的帧间隔时间为1/24秒,所采集视频中图片被采集的帧间隔时间为1/24秒,抽帧图片被采集的帧间隔时间为半个小时。
拍照方式是指,电子设备每隔一定的时间间隔采集一张图片,该时间间隔即为录制时间间隔,也即是图片被采集的帧间隔时间。电子设备将采集的这些图片的播放帧率设置为标准的播放帧率,得到视频文件。例如,前例录制时间为72小时的花蕾开放的录制过程,电子设备每半个小时采集一张图片,共计采集144张图片。电子设备将采集的这些图片的播放帧率设置为每秒24张图片,得到视频文件。电子设备在播放视频文件时可在6秒钟的播放时间内,播放录制时间为72小时的开花过程。其中,采集图片的时间间隔也可以称为抽帧间隔。
本申请实施例中,拍照方式下,得到的延时摄影视频文件被播放时的帧间隔时间小于图片被采集的帧间隔时间。例如前例中,延时摄影视频文件被播放时的帧间隔时间为1/24秒,图片被采集的帧间隔时间为半个小时。
录像方式中,电子设备每秒钟需要采集固定数量的图片。每张图片的曝光时间固定,或者曝光时间仅在一定范围可调。而暗光场景下,每张图片需要更长的曝光时间以提高图片的亮度。因此,如果在暗光场景下电子设备通过录像方式进行延时摄影模式的视频采集,由于每帧图片亮度较低,所得到的视频质量也较低。
而拍照方式中,摄像头每隔一定的时间间隔采集一张图片,该时间间隔可为图片提供足够的曝光时间,以提高图片的亮度。因此,在暗光场景下通过拍照方式进行延时摄影模式的视频采集时,由于每帧图片亮度较高,所得到的视频质量也较高。
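The frame-count arithmetic in the flower-bud example above can be checked with a short sketch (the function name is an assumption for illustration):

```python
def timelapse_summary(record_hours, capture_fps, extract_interval_s, playback_fps):
    """Return (frames recorded, frames kept after extraction, playback
    seconds) for record-mode time-lapse: e.g. 72 h recorded at 24 fps
    with one frame kept every 30 min, played back at 24 fps."""
    total_frames = int(record_hours * 3600 * capture_fps)      # 72 h -> 6220800
    extracted = int(record_hours * 3600 / extract_interval_s)  # every 1800 s -> 144
    playback_seconds = extracted / playback_fps                # 144 / 24 -> 6.0 s
    return total_frames, extracted, playback_seconds
```

This reproduces the numbers in the example: 6 220 800 recorded frames, 144 extracted frames, and a 6-second playback of the 72-hour blooming process.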
(4)拍摄参数
拍摄参数可包含快门、曝光时间、光圈值(aperture value,AV)、曝光值(exposure value,EV)、ISO和抽帧间隔。以下分别进行介绍。
快门是控制光线进入相机时间长短,以决定图片曝光时间的装置。快门保持在开启状 态的时间越长,进入摄像头的光线越多,图片的曝光时间越长。快门保持在开启状态的时间越短,进入摄像头的光线越少,图片的曝光时间越短。
快门速度,是快门保持开启状态的时间。快门速度即是从快门开启状态到关闭状态的时间间隔。在这一段时间内,物体可以在底片上留下影像。快门速度越快,运动物体在图像传感器上呈现的图片越清晰。反之,快门速度越慢,运动的物体呈现的图片就越模糊。
曝光时间是指为了将光投射到摄像头的感光材料的感光面上,快门所要打开的时间。曝光时间由感光材料的感光度和对感光面上的照度确定。曝光时间越长,进入摄像头的光越多。因此暗光场景下需要长的曝光时间,逆光场景下需要短的曝光时间。快门速度即是曝光时间。
光圈值,是镜头的焦距与镜头通光直径的比值。光圈值越大,进入摄像头的光线越多。光圈值越小,进入摄像头的光线越少。
曝光值是快门速度和光圈值组合来表示摄像头的镜头通光能力的一个数值。曝光值的定义可以是:
EV = log₂(N²/t)
其中N是光圈值;t是曝光时间(快门),单位秒。
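As a quick numeric check of the definition above (the function name is an assumption for illustration):

```python
import math

def exposure_value(aperture_n, exposure_time_s):
    """EV = log2(N^2 / t): N is the aperture value and t the exposure
    time (shutter) in seconds, per the definition above."""
    return math.log2(aperture_n ** 2 / exposure_time_s)
```

For example, N = 2.0 with t = 1/4 s gives EV = log₂(16) = 4; a longer exposure time lowers the EV, a narrower aperture (larger N) raises it.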
ISO用于衡量底片对于光的灵敏程度。对于不敏感的底片,需要更长的曝光时间以达到跟敏感底片亮度相同的成像。对于敏感的底片,需要较短的曝光时间以达到的与不敏感的底片亮度相同的成像。
对于录像方式来说,每隔一定的录制时间从采集的视频中抽取一帧图片。该一定的录制时间即为抽帧间隔。例如前例拍摄花蕾开放示例中,电子设备按照标准的帧率(即每秒24张图片)采集视频,录制时间为72小时,共采集6220800(即72×60×60×24)张图片。这些图片所组成的视频在被播放时播放时间也为72小时。电子设备的抽帧间隔为半小时,即电子设备在采集的视频中每半小时的录制时间间隔抽取一张图片,共从录制时间为72小时的视频中抽取144张图片。录像方式下,图片被采集的帧间隔时间为第一时间间隔,例如前例中,每秒24张图片时,第一时间间隔为1/24秒。
对于拍照方式来说,抽帧间隔是采集相邻两张图片的时间差。拍照方式下,抽帧间隔即为图片被采集的帧间隔时间,该抽帧间隔可称为第二时间间隔。第二时间间隔大于第一时间间隔,且第二时间间隔由曝光时间确定。
抽帧间隔越短,得到的视频在播放时画面中动景的移动轨迹越流畅;抽帧间隔越长,得到的视频在播放时画面中动景的移动轨迹越卡顿。
拍摄参数中,快门、曝光时间、光圈值、曝光值和ISO,电子设备可通过算法实现自动对焦(auto focus,AF)、自动曝光(automatic exposure,AE)、自动白平衡(auto white balance,AWB)和3A(AF、AE和AWB),以实现这些拍摄参数的自动调节。
自动对焦是指电子设备通过调整聚焦镜头的位置获得最高的图片频率成分,以得到更高的图片对比度。其中,对焦是一个不断积累的过程,电子设备比较镜头在不同位置下拍摄的图片的对比度,从而获得图片的对比度最大时镜头的位置,进而确定对焦的焦距。
自动曝光是指电子设备根据可用的光源条件自动设置曝光值。电子设备可根据当前所采集图片的曝光值,自动设定快门速度和光圈值,以实现自动设定曝光值。
物体颜色会因投射光线颜色产生改变,在不同光线颜色下电子设备采集出的图片会有不同的色温。白平衡与周围光线密切相关。无论环境光线如何,电子设备的摄像头能识别出白色,并以白色为基准还原其他颜色。自动白平衡可实现电子设备根据光源条件调整图片颜色的保真程度。
3A即自动对焦、自动曝光和自动白平衡。
本申请实施例中,快门、曝光时间、光圈值、曝光值、ISO和抽帧间隔,这些拍摄参数均与图片的曝光量相关。本申请实施例中,曝光量可表征摄像头中感光器在曝光时间内接收到光的多少。
(5)视频后处理算法
本申请实施例中,视频后处理算法用于对采集的多张图片或者视频进行处理。具体的,视频后处理算法可进行防抖处理,以减少电子设备抖动引起的画面模糊的情况。视频后处理算法可被本申请实施例提供的图像处理模块调用对采集的图片或视频进行处理。
对于不同的拍摄场景,图像处理模块可采用不同的视频后处理算法进行处理。具体的,对于逆光场景,视频后处理算法可以是高动态范围(high dynamic range,HDR)算法,图像处理模块可使用HDR算法将采集的多张图片合成为一张图片。这多张图片具有不同的曝光时间。不同的曝光时间的图片,图片的亮度不同,所提供图片的细节也不同。图像处理模块利用每个曝光时间得到的图片的最佳细节来合成得到HDR图片。这个HDR图片可作为一帧图片发送给预览显示模块进行预览,也可发送给编码模块进行编码。对于暗光场景,视频后处理算法可包含暗光优化算法,以提高暗光场景下所采集图片的质量。
对于普通光场景,图像处理模块可采用视频后处理算法对采集的图片或视频进行防抖、降噪等处理。对于暗光场景,图像处理模块可进行防抖、降噪等处理,还可以通过暗光优化算法进行暗光优化处理。而对于逆光场景,图像处理模块可进行防抖、降噪等处理,还可使用HDR算法进行处理。
下面介绍本申请实施例涉及的电子设备。
图1示出了电子设备100的结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬 件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等***器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与***设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施 例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图片或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图片,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用 处理器等实现采集功能,以实现本申请实施例中HAL层的图像采集模块。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图片或视频。ISP还可以对图片的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图片或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图片或视频信号。ISP将数字图片或视频信号输出到DSP加工处理。DSP将数字图片或视频信号转换成标准的RGB,YUV等格式的图片或视频信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图片或视频信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图片或视频播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中, 音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。 当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振 动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
在本申请实施例中,电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
请参见图2,图2示出了本申请实施例示例性提供的电子设备100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。
如图2所示,可将Android系统分为三层,从上至下分别为:应用程序层、应用程序框架层和硬件抽象层(hardware abstraction layer,HAL)层。其中:
应用程序层包括一系列应用程序包,例如包含相机应用。不限于相机应用,还可以包含其他一些应用,例如图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
相机应用可为用户提供延时摄影模式。如图2所示,相机应用可包含模式加载模块、拍摄控制模块和预览显示模块。其中:
模式加载模块,用于在相机应用启动时向HAL层查询模式,并根据查询结果加载模式。其中,模式可包含夜景模式、人像模式、拍照模式、短视频模式、录像模式、延时摄影模式等。
拍摄控制模块,用于在检测到切换到延时摄影模式时,与预览显示模块一起启动,并通知HAL层能力使能模块启动延时摄影模式相关的模块。拍摄控制模块还可响应于用户在相机应用的用户界面中对开始录像控件的触摸操作,通知应用框架层中的编码模块和图像处理模块。编码模块接收到通知后从HAL层的图像处理模块开始获取视频数据流。编码模块可对视频流进行编码以生成视频文件。用户在相机应用的用户界面中对结束录像控件执行触摸操作时,拍摄控制模块还可响应于该触摸操作,通知应用框架层中的编码模块和图像处理模块。编码模块接收到通知后从HAL层的图像处理模块停止获取视频数据流。
预览显示模块,用于从HAL层的图像处理模块或者图像采集模块接收视频数据流,并 在用户界面上显示预览图片或预览视频,且预览的图片和视频可实时更新。
应用程序框架层(framework,FWK)为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图2所示,应用程序框架层可包含相机服务接口(Camera Service),该相机服务接口可提供应用程序层中相机应用和HAL层之间的通信接口。应用程序框架层还可包含编码模块。编码模块可接收来自相机应用中拍摄控制模块的通知,以开始或者停止从HAL层的图像处理模块接收视频数据流,并对视频数据流进行编码以得到视频文件。
如图2所示,HAL层包含用于为相机应用提供延时摄影模式的模块。在延时摄影模式下这些提供延时摄影模式的模块可采集图片或视频,并根据采集的图片或视频识别得到拍摄场景,并上报识别到的拍摄场景。HAL层还为不同拍摄场景提供相应的后处理算法。
具体的,如图2所示,HAL层可包含与相机的延时摄影模式相关的模块:能力使能模块、图像采集模块、场景识别模块和图像处理模块。其中:
能力使能模块,用于在接收到拍摄控制模块的通知后启动HAL层的延时摄影模式相关的模块,例如启动图像采集模块、场景识别模块和图像处理模块。具体的,用户在相机应用的用户界面中操作以切换到延时摄影模式时,相机应用中的拍摄控制模块可通知HAL层的能力使能模块,能力使能模块在收到通知后,使能启动图像采集模块、场景识别模块和图像处理模块。
图像采集模块,用于调用摄像头采集图片或视频,并把采集的图片或视频发送到场景识别模块和图像处理模块。
场景识别模块,用于根据接收的图片或视频进行场景识别,以识别出不同亮度的拍摄场景,例如普通光场景、逆光场景和暗光场景。
图像处理模块,可包含视频后处理算法,不同亮度的拍摄场景可分别对应不同的视频后处理算法。图像处理模块可通过视频后处理算法对图片或视频进行处理以得到视频数据流,并将视频数据流发送给预览显示模块进行预览显示,发送给编码模块以形成视频文件。
需要说明的,图2所示的电子设备的软件架构仅仅是本申请实施例的一种实现方式,实际应用中,电子设备还可以包括更多或更少的软件模块,这里不作限制。
下面结合图2所示出的电子设备100的软件结构,具体介绍本申请实施例提供的视频采集方法。请参阅图3,图3是本申请实施例提供的一种视频采集方法的流程示意图。如图3所示,该视频采集方法包含步骤S101~S124。
本申请实施例提供的视频采集方法中,首先,相机应用可响应于用户开启相机应用的操作,加载延时摄影模式。加载完成延时摄影模式之后,用户可通过触摸延时摄影模式图标启动延时摄影模式。然后,HAL层可识别拍摄场景并上报给应用程序层的拍摄控制模块。拍摄控制模块可对延时摄影模式下的拍摄参数和拍摄方式进行调整并下发回HAL层的图像采集模块。最后,图像采集模块可根据调整后的拍摄参数和拍摄方式采集图片或视频。图像处理模块还可根据所识别出的拍摄场景确定所采用的视频后处理算法,利用该视频后处理算法对采集到的图片或视频进行处理,处理后得到的视频数据流可被编码模块进行编 码得到视频文件。预览显示模块还可以获得处理后得到的视频数据流,进行预览显示。
其中:步骤S101~S103介绍加载延时摄影模式过程。步骤S104~S118介绍拍摄参数和拍摄方式调整过程。步骤S119~S124介绍形成视频文件及预览过程。以下分别进行描述。
(一)步骤S101~S103,加载延时摄影模式过程
S101、用户启动相机应用。
本申请实施例中,用户可通过对相机应用的应用图标的操作,例如触摸操作来启动相机应用,具体可参考图4中的(A)的具体描述。
S102、当相机应用启动时,模式加载模块向HAL层查询模式。
本申请实施例中,HAL层可为相机应用提供延时摄影模式。即在该延时摄影模式下,HAL层中,能力使能模块、图像采集模块、场景识别模块和图像处理模块可被启动执行各自的功能。
本申请实施例中,HAL层还可以为相机应用提供其他模式,例如人像模式、普通模式、夜景模式和录像模式等,本申请实施例对此不作限定。
具体的,模式加载模块可向能力使能模块查询模式。能力使能模块可响应于模式加载模块的查询,向模式加载模块反馈HAL层为相机应用提供的模式,例如提供的模式包含:延时摄影模式、人像模式、普通模式、夜景模式和录像模式等。
S103、模式加载模块根据查询结果加载模式。
其中,加载的模式中包含延时摄影模式,模式加载模块在加载过程中还将各个模式在应用程序层和HAL层中对应的模块进行初始化。在初始化之后,电子设备100可显示每个模式对应的图标,具体可参考图4中的(B)和(C)所示出示例的描述。在初始化之后,响应于用户对延时摄影模式对应的图标的触摸操作,拍摄控制模块可通知HAL层中的能力使能模块、图像采集模块、场景识别模块和图像处理模块启动以执行各自的功能。在初始化之后,其他模式类似于延时摄影模式,可响应于用户对模式对应的图标的触摸操作,启动HAL层中相应的模块。
下面介绍加载延时摄影模式过程所涉及的用户界面。请参阅图4,图4是本申请实施例提供的一种人机交互界面示意图。如图4中的(A)所示,电子设备100可显示用户界面10,为电子设备100的主屏幕界面10。主屏幕界面10包括日历小工具(widget)101、天气小工具102、应用程序图标103、状态栏104以及导航栏105。其中:
日历小工具101可用于指示当前时间,例如日期、星期几、时分信息等。
天气小工具102可用于指示天气类型,例如多云转晴、小雨等,还可以用于指示气温等信息,还可以用于指示地点。
应用程序图标103可以包含微信(Wechat)的图标、推特(Twitter)的图标、脸书(Facebook)的图标、微博(Sina Weibo)的图标、QQ(Tencent QQ)的图标、优兔(YouTube)的图标、图库(Gallery)的图标和相机(camera)的图标等,还可以包含其他应用的图标,本申请实施例对此不作限定。任一个应用图标可用于响应用户的操作,例如触摸操作,使得电子设备启动图标对应的应用。
状态栏104中可以包括运营商的名称(例如***)、时间、WI-FI图标、信号强度和当前剩余电量。
导航栏105可以包括:返回按键1051、主界面(home screen)按键1052、呼出任务历史按键1053等***导航键。其中,主屏幕界面为电子设备100在任何一个用户界面检测到作用于主界面按键1052的用户操作后显示的界面。当检测到用户点击返回按键1051时,电子设备100可显示当前用户界面的上一个用户界面。当检测到用户点击主界面按键1052时,电子设备100可显示主屏幕界面10。当检测到用户点击呼出任务历史按键1053时,电子设备100可显示用户最近打开过的任务。各导航键的命名还可以为其他,比如,1051可以叫Back Button,1052可以叫Home button,1053可以叫Menu Button,本申请实施例对此不做限制。导航栏105中的各导航键不限于虚拟按键,也可以实现为物理按键。
用户启动相机应用,可通过触摸相机图标实现。如图4中的(A)所示,响应于用户对相机图标的触摸操作,模式加载模块即执行步骤S102~S103。在模式加载模块完成加载模式后,电子设备100可显示每个模式对应的图标。
示例性的,已加载完成的模式包含夜景模式、人像模式、拍照模式、短视频模式、录像模式、延时摄影模式等。如图4中的(B)所示,电子设备100可显示相机应用界面20。相机应用界面20上可包含已加载完成的模式对应的图标204。图标204可包含夜景模式图标204A、人像模式图标204B、拍照模式图标204C、短视频模式图标204D、录像模式图标204E和更多图标204F。更多图标204F用于显示已加载完成的模式的图标,具体参考图4中的(C)的描述。拍摄控制模块可响应于用户对图标204中任一个图标的触摸操作,启动图标对应的模式。
如图4中的(B)所示,相机应用界面20还可以包含已拍摄图像回显控件201、拍摄控件202、摄像头切换控件203、取景框205、调焦控件206A、设置控件206B和闪光灯开关206C。其中:
已拍摄图像回显控件201,用于用户查看已拍摄的图片和视频。
摄像头切换控件203,用于将采集图像的摄像头在前置摄像头和后置摄像头之间切换。
取景框205,用于对所采集图片进行实时预览显示。
调焦控件206A,用于对摄像头进行调焦。
设置控件206B,用于设置采集图像时的各类参数。
闪光灯开关206C,用于开启/关闭闪光灯。
如图4中的(C)所示,响应于用户对更多图标204F的触摸操作,电子设备100显示模式选择界面30,模式选择界面30可包含通过步骤S103其他的已加载完成的模式的图标。
模式选择界面30可包含延时摄影模式图标204G,还可包含专业拍照模式图标、美肤拍照模式图标、慢动作模式图标、专业录像模式图标、美肤录像模式图标、美食模式图标、3D动态全景模式图标、全景模式图标、HDR模式图标、智能识物模式图标、流光快门模式图标、有声照片模式图标、在线翻译模式图标、水印模式图标和文档校正模式图标。
本申请实施例中,电子设备可以是响应于用户操作,打开相机应用,然后在显示屏上显示相机应用界面20。
用户可对上述任一个模式图标进行操作,例如触摸操作来启动对应的模式,则电子设备在HAL层中启动对应的模块。
(二)步骤S104~S118,拍摄参数和拍摄方式调整过程
S104、用户切换到延时摄影模式。
如图4中的(C)所示,用户可在模式选择界面30上触摸延时摄影模式图标204G来切换到延时摄影模式。
S105、响应于用户对延时摄影模式图标204G的触摸操作,启动拍摄控制模块和预览显示模块。
启动拍摄控制模块之后,拍摄控制模块可通知能力使能模块使能启动HAL层中与延时摄影模式相关的模块,例如图像采集模块、场景识别模块和图像处理模块。
本申请实施例中,第一用户操作可包含用户对延时摄影模式图标204G的触摸操作。
在一种可能的实施方式中,拍摄控制模块和预览显示模块可以是在步骤S102中已经启动,即响应于用户启动相机应用,拍摄控制模块和预览显示模块进行启动。拍摄控制模块,可用于各个模式下的拍摄控制。预览显示模块,可用于各个模式下进行预览显示。
S106、拍摄控制模块向HAL层的能力使能模块发送用于启动延时摄影模式的通知。
S107、能力使能模块使能启动图像采集模块、场景识别模块和图像处理模块。
S108、图像采集模块根据预设的拍摄参数和拍摄方式采集图片或视频。
其中,预设的拍摄参数和拍摄方式可以是预设的,例如可对应普通光场景。预设的拍摄方式可以是录像方式,关于录像方式可参考步骤S112中的具体描述。
S109、图像采集模块将采集的图片或视频发送给场景识别模块。
本申请实施例中,图像采集模块还可以将图像或视频发送给图像处理模块进行处理以得到视频数据流,然后图像处理模块将视频数据流发送到预览显示模块进行预览显示。其中图像处理模块可以使用预设的拍照场景(例如普通光场景)对应的后处理算法进行处理得到视频数据流。
本申请实施例中,视频数据流包含一组有先后顺序的图片,这组图片在被拍摄时可被设定时间戳。在这组图片被编码模块编码过程中可被重新设定时间戳。
S110、场景识别模块根据图片或视频识别拍摄场景。
本申请实施例中,场景识别模块可以根据图片或视频获得所采集图片的曝光参数,并确定图片亮暗两部分区域亮度差值。具体的,场景识别模块可利用曝光参数来确定拍摄场景。例如,曝光参数为EV,相机应用可向HAL层下发用于检测曝光参数的通知。HAL层中的场景识别模块在接收到用于检测曝光参数的通知时,可计算图片的曝光值和图片亮暗两部分区域亮度差值。当曝光值大于第一阈值,且图片亮暗两部分区域亮度差值小于第二阈值时,场景识别模块可确定拍摄场景为暗光场景。当曝光值小于第一阈值,且图片亮暗两部分区域亮度差值大于第二阈值时,场景识别模块可确定拍摄场景为逆光场景。当曝光值小于第一阈值,且图片亮暗两部分区域亮度差值小于第二阈值时,场景识别模块可确定拍摄场景为普通光场景。可选的,场景识别模块还可利用上述原理识别多张图片对应的拍摄场景,以更加准确的确定拍摄场景。
本申请实施例对场景识别模块识别拍摄场景所使用的具体算法不作限定。
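The threshold scheme described in step S110 can be sketched as a simple classifier. The function name and threshold values are assumptions for illustration; as noted above, the embodiments do not fix a specific recognition algorithm:

```python
def classify_scene(exposure_value, brightness_diff, ev_threshold, diff_threshold):
    """Scene recognition per S110: an EV above the first threshold with a
    small bright/dark-region difference indicates a low-light scene; an EV
    below it with a large difference indicates backlight; both below their
    thresholds indicates a normal light scene."""
    if exposure_value > ev_threshold and brightness_diff < diff_threshold:
        return "low-light"
    if exposure_value < ev_threshold and brightness_diff > diff_threshold:
        return "backlit"
    if exposure_value < ev_threshold and brightness_diff < diff_threshold:
        return "normal"
    return "unknown"  # combinations the description does not cover
```

In practice the module may classify several consecutive frames and take a majority vote, as the text suggests, to make the decision more robust.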
S111、场景识别模块将识别到的拍摄场景上报给拍摄控制模块,并发送到图像处理模块。
本申请实施例中,当场景识别模块识别到当前的拍摄场景与预设的拍摄场景不同的情 况下,才执行步骤S111。具体的,图像采集模块中预设的拍摄参数和拍摄方式是预设的拍摄场景对应的拍摄参数和拍摄方式。示例性的,预设的拍摄场景可以是普通光场景,图像采集模块中预设的拍摄参数和拍摄方式是普通光场景下的拍摄参数和拍摄方式。当场景识别模块识别到当前的拍摄场景与普通光场景不同的情况下,才执行步骤S111和后续步骤。当场景识别模块识别到当前的拍摄场景为普通光场景的情况下,无需执行步骤S111。
S112、拍摄控制模块根据接收到的拍摄场景调整拍摄参数和拍摄方式。
其中,调整后的拍摄参数可以是第一拍摄参数,调整后的拍摄方式为第一拍摄方式。拍摄参数可包含以下任一个或多个:快门、曝光时间、光圈值、曝光值、ISO和抽帧间隔。拍摄方式可包含录像方式和拍照方式。
拍摄控制模块可对每种拍摄场景对应设置一种拍摄参数和拍摄方式。示例性的,第一拍摄参数是普通光场景对应的拍摄参数,第一拍摄方式是普通光对应的拍摄方式。
第一拍摄参数可以为逆光场景对应的拍摄参数,第一拍摄方式可以为逆光场景对应的拍摄方式。
第一拍摄参数还可以为暗光场景对应的拍摄参数,第一拍摄方式还可以为暗光场景对应的拍摄方式。
例如,拍摄控制模块在接收拍摄场景为逆光场景时,根据上述对应关系,确定调整后的拍摄参数为逆光场景对应的拍摄参数,调整后的拍摄方式为逆光场景对应的拍摄方式。
下面具体介绍调节拍摄方式的过程。在普通光场景和逆光场景下,由于拍摄环境光线强度足够,图像采集模块可按照标准的帧率(例如每秒采集24张图片)采集图片形成视频,则图像采集模块采集到的视频中每帧图片的亮度足够。而在暗光场景下,如果图像采集模块按照标准的帧率录像,拍摄环境光线强度不足。由于每张图片的曝光时间不足,图片的亮度不够。因此,在暗光场景下,图像采集模块可通过拍照的方式,将每张图片的曝光时间调整为比标准帧率对应的曝光时间更长的时间,从而可得到亮度更高的一系列图片,以得到的质量更高的视频。
下面具体介绍调节拍摄参数的过程。
拍摄参数中,快门、曝光时间、曝光值、ISO这些曝光参数可通过算法实现自动对焦、自动曝光、自动白平衡和3A,以实现这些参数的自动调节。其中,对于快门、曝光时间、光圈值、曝光值和ISO来说,拍摄控制模块可计算不同的拍摄场景下对应的曝光值。拍摄控制模块可根据所采集图片的曝光值,自动设定快门速度和光圈值,以实现拍摄控制模块根据拍摄场景自动设定拍摄参数。具体的,拍摄控制模块可根据拍摄场景对应的曝光值计算新的曝光参数。新的曝光参数可包含新的快门、曝光时间、曝光值、ISO。拍摄控制模块将新的曝光参数应用到相机,之后拍摄控制模块再次获取曝光值。如果曝光值不满足要求,拍摄控制模块可重新调整曝光参数,直到得到的曝光值满足要求。
拍摄参数中,抽帧间隔可受拍摄场景的影响。在本申请的一些实施例中,抽帧间隔可以是响应于用户操作在相机应用的用户界面上设定的。对于每个拍摄场景,场景识别模块可确定单帧处理时间,即图像采集模块和图像处理模块完成图像采集和处理所需的时间。则在对应的拍摄场景,在相机应用的应用界面上可设置的抽帧间隔的最小值大于或等于该场景下的单帧处理时间。本申请实施例中,在拍照方式下,抽帧间隔为多张图片被采集的 帧间隔时间,即为第二时间间隔。
例如,在暗光场景下,单帧处理时间为1秒。则在场景识别模块将识别到的暗光场景上报给相机应用中的拍摄控制模块之后,相机应用的应用界面上用于设定抽帧间隔的控件,所能够设定的抽帧间隔的最小值大于或等于1秒。具体可参考图9所描述示例。下面介绍本申请实施例中拍摄场景为暗光场景时确定单帧处理时间的一种示例。对于暗光场景,场景识别模块还可根据图片或视频确定曝光值,并根据曝光值确定新的曝光时间。场景识别模块确定新的曝光时间可以按照以下规则:曝光值越小,新的曝光时间越短。曝光值越大,新的曝光时间越长。在确定新的曝光时间之后,场景识别模块可将该新的曝光时间上报给相机应用的拍摄控制模块,拍摄控制模块再把该新的曝光时间下发到图像采集模块。图像采集模块可将采集图片时每张图片的曝光时间设置为该新的曝光时间。
可选的,在确定曝光时间之后,场景识别模块还可根据新的曝光时间确定单帧处理时间。或者场景识别模块上报新的曝光时间,拍摄控制模块根据该新的曝光时间确定单帧处理时间。在形成视频文件及预览过程中,相机应用按照该单帧时间间隔指令HAL层采集多张图片,具体参考图9具体描述。其中,新的曝光时间还可以是拍摄控制模块确定然后发送给场景识别模块的。
对于拍摄场景为普通光场景和逆光场景,所采用的拍摄方式为录像方式。每张图片的曝光时间可根据录制的帧率确定。场景识别模块根据曝光时间确定普通光场景和逆光场景下的单帧处理时间。
本申请实施例中,普通光场景对应的拍摄参数可包含自动对焦、自动曝光、自动白平衡和3A,还可以包含普通光场景下抽帧间隔在用户界面上显示的可调范围。逆光场景对应的拍摄参数可包含自动对焦、自动曝光、自动白平衡和3A,还可以包含逆光场景下抽帧间隔在用户界面上显示的可调范围。暗光场景对应的拍摄参数可包含自动对焦、自动曝光、自动白平衡和3A,还可以包含暗光场景下抽帧间隔在用户界面上显示的可调范围。
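The constraint above — that the user-selectable extraction interval must be at least the per-scene single-frame processing time — amounts to a one-line clamp (a sketch; the function name is an assumption):

```python
def clamp_frame_interval(requested_interval_s, single_frame_time_s):
    """The UI must not offer an extraction interval shorter than the
    scene's single-frame capture-plus-processing time, so clamp the
    requested interval up to that minimum."""
    return max(requested_interval_s, single_frame_time_s)
```

For the low-light example above with a 1-second single-frame time, a requested interval of 0.5 s would be raised to 1 s, while any longer request is kept unchanged.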
S113、拍摄控制模块将第一拍摄参数和第一拍摄方式发送给图像采集模块。
本申请实施例中,当场景识别模块识别到的场景为普通光场景时,由于与预设的拍摄场景相同,则无需通知拍摄控制模块调整拍摄参数和拍摄方式。当场景识别模块识别到的场景为逆光场景时,拍摄控制模块设置拍摄参数为逆光场景对应的拍摄参数,设置拍摄方式为逆光场景对应的拍摄方式,并将逆光场景对应的拍摄参数和逆光场景对应的拍摄方式下发给图像采集模块。当场景识别模块识别到的场景为暗光场景时,拍摄控制模块设置拍摄参数为暗光场景对应的拍摄参数,设置拍摄方式为暗光场景对应的拍摄方式,并将暗光场景对应的拍摄参数和暗光场景对应的拍摄方式下发给图像采集模块。
S114、图像采集模块根据第一拍摄参数和第一拍摄方式采集图片或视频。
S115、图像采集模块将采集的图片或视频发送给图像处理模块。
本申请实施例中,图片或视频可以是实时采集并发送的。即图像采集模块在采集一张图片后即可将该图片发送给图像处理模块。
S116、图像处理模块根据识别的拍摄场景对图片或视频进行处理得到视频数据流。
本申请实施例中,图像处理模块对不同的拍摄场景可设置不同的后处理算法。下面介绍针对普通光场景、逆光场景和暗光场景中每个拍摄场景图像处理模块所设置的后处理算法。对于普通光场景和暗光场景,图像处理模块可采用视频后处理算法对采集的图片或视频进行防抖、降噪等处理。而对于逆光场景,图像处理模块可使用HDR算法进行处理,还可以进行防抖、降噪等处理。具体可参考视频后处理算法的概念的具体描述。
下面介绍拍摄参数和拍摄方式调整过程所涉及的用户界面。请参阅图5,图5是本申请实施例提供的一种人机交互界面示意图。如图4中的(C)和图5中的(A)所示,在步骤S104中,用户可触摸延时摄影模式图标204G来切换到延时摄影模式。响应于用户对延时摄影模式图标204G的触摸操作,电子设备100显示相机应用界面20。且该相机应用界面20上包含延时摄影模式提示207和关闭控件208。关闭控件208用于关闭延时摄影模式,响应于用户对关闭控件208的触摸操作,HAL层中,与延时摄影模式相关的模块(图像采集模块、图像处理模块和场景识别模块)可关闭,且延时摄影模式提示207不再显示,电子设备100显示图4中的(B)所描述的界面。
在步骤S110中场景识别模块识别到拍摄场景之后,可将识别到的拍摄场景上报给相机应用,相机应用的应用界面上可包含该识别到的拍摄场景。如图5中的(B)所示,相机应用界面20上还可以包含拍摄场景提示209。示例性的,该拍摄场景提示209可提示:暗光场景。
调整完拍摄参数和拍摄方式之后,当图像采集模块根据第一拍摄参数和第一拍摄方式采集图片或视频时,与调整前采集的图片相比质量可提高。如图5中的(A)和(B)所示,调整完拍摄参数和拍摄方式后,取景框205中显示的图片流的质量比调整前的图片流的质量高。例如,调整拍摄参数后,所拍摄的图片的曝光时间增加,图片的亮度增加。另外,调整拍摄参数之后,视频后处理算法也可以相应进行调整,使得取景框205中的预览图像的亮度范围相应变化。示例性的,调整完视频后处理算法之后,取景框205中显示的图片的亮度范围比调整前亮度范围大。
本申请实施例中,如果拍摄参数和拍摄方式调整过程未执行完成,例如执行到步骤S112时,用户触摸关闭控件208,则停止执行步骤S112及后续步骤,电子设备100显示图4中的(B)所示的相机应用界面20。
下面介绍拍摄场景对抽帧间隔在用户界面上的可调范围的影响所涉及的用户界面。在一种可能的实现方式中,在步骤S110中场景识别模块识别到拍摄场景之后,相机应用可根据识别到的拍摄场景,在相机应用界面20上显示用于设定抽帧间隔的控件,即抽帧间隔控件211。具体的,请参阅图6,图6是本申请实施例提供的一种人机交互界面示意图。
如图6中的(A)所示,在步骤S110中场景识别模块识别到的拍摄场景为暗光场景之后,相机应用界面20上还可包含抽帧间隔控件211和提示212。其中:
提示212,可用于提示:点击图标,可调节抽帧间隔,长按查看详情。
抽帧间隔控件211,用于调节所拍摄视频的抽帧间隔,具体可参考图7所描述示例。
如图6中的(B)所示,响应于用户对抽帧间隔控件211的长按操作,电子设备100可显示抽帧间隔详情界面40,包含功能提示401,可用于提示:抽帧间隔越大,拍摄的视频被压缩到越短的时间内播放。不同的抽帧间隔适用不同的场景,点击控件查看场景详情。抽帧间隔详情界面40还可包含去查看选项402,用于调节抽帧间隔。
请参阅图7,图7是本申请实施例提供的一种人机交互界面示意图。如图6中的(B)和图7所示,响应于用户对去查看选项402的触摸操作,电子设备100显示相机应用界面20,相机应用界面20上包含抽帧间隔调节控件213。
如图6中的(A)和图7所示,响应于用户对抽帧间隔控件211的触摸操作,电子设备100显示相机应用界面20,相机应用界面20上包含抽帧间隔调节控件213。
如图7所示,当检测到作用在延时摄影模式图标的用户操作时,电子设备显示的相机应用界面20上包含拍摄场景提示209,提示暗光场景,该相机应用界面可以是延时摄影的界面。该延时摄影的界面可包含第一控件,即抽帧间隔调节控件213。
例如,步骤S110中场景识别模块识别到的拍摄场景为暗光场景,单帧处理时间为1秒,关于单帧处理时间的确定可参考步骤S112中的具体描述。则抽帧间隔调节控件213所能够设置的抽帧间隔的最小值大于或等于暗光场景下的单帧处理时间1秒。这样,可保证在抽帧间隔时间内可抽取到两帧图片,减少由于抽帧间隔小于单帧处理时间产生抽帧失败的情况。
本申请实施例中,第一控件可包含抽帧间隔调节控件213,用于在大于或等于曝光时间的取值范围内调节第二时间间隔。具体的,用户可触摸或拖动抽帧间隔调节控件213以调节抽帧间隔。不同的抽帧间隔可对应不同的拍摄场景。示例性的,如图7所示,抽帧间隔调节控件213可包含都市人潮标识213A、日出日落标识213B、天空云彩标识213C和建筑制造标识213D。其中:
都市人潮标识213A,用于指示在抽帧间隔被设置为1秒时,适用的场景为都市人潮。
日出日落标识213B,用于指示在抽帧间隔被设置为10秒时,适用的场景为日出日落。
天空云彩标识213C,用于指示在抽帧间隔被设置为15秒时,适用的场景为天空云彩。
建筑制造标识213D,用于指示在抽帧间隔被设置为30秒时,适用的场景为建筑制造。
本申请实施例中,上述的场景示例仅用于解释本申请实施例,不应构成限定。另外,上述不同抽帧间隔还可以用于其他场景的拍摄。
示例性的,步骤S110中场景识别模块识别到的拍摄场景为普通光场景时,单帧处理时间为0.5秒。请参阅图8,图8是本申请实施例提供的一种人机交互界面示意图。如图8所示,抽帧间隔调节控件213所能够设置的抽帧间隔的最小值大于或等于普通光场景下的单帧处理时间0.5秒。这样,可保证在抽帧间隔时间内可抽取到图片,减少由于抽帧间隔小于单帧处理时间产生抽帧失败的情况。
如图8所示,拍摄场景提示209,提示普通光场景。
S117、图像处理模块将视频数据流发送给预览显示模块。
S118、预览显示模块根据视频数据流进行预览显示。
预览显示的图片或视频可以是实时更新的。
步骤S119~S124,形成视频文件及预览过程。
S119、用户执行对拍摄控件202的触摸操作。
S120、响应于用户对拍摄控件202的触摸操作,拍摄控制模块向编码模块和HAL层发送通知。
其中,该通知用于使得编码模块从图像处理模块接收视频数据流并编码形成视频文件。
本申请实施例中,响应于用户对拍摄控件202的触摸操作,拍摄控制模块还可以向HAL层通知开始录制视频,HAL层中图像处理模块可将实时的视频数据流发送到编码模块。
S121、编码模块在接收到通知后,从图像处理模块接收视频数据流。
S122、编码模块对视频数据流进行编码,形成视频文件。
S123、图像处理模块将视频数据流发送给预览显示模块。
S124、预览显示模块根据视频数据流进行预览显示。
其中,预览显示的画面实时更新。
下面介绍形成视频文件及预览过程所涉及的用户界面。如图5中的(B)所示,拍摄控件202的状态为未开始拍摄状态。如图5中的(C)所示,响应于用户对拍摄控件202的触摸操作,拍摄控件202的状态由未开始拍摄状态变为正在拍摄状态。当拍摄控件202的状态是正在拍摄状态时,相机应用界面20上可显示用于提示视频录制时间的控件210,用于实时更新视频录制已持续的时间。
当拍摄控件202的状态是正在拍摄状态时,用户可再次对拍摄控件202执行触摸操作。则响应于该触摸操作,拍摄控件202的状态由正在拍摄状态变回未开始拍摄状态。电子设备100完成录制一次延时摄影模式下的视频,编码模块停止接收视频数据流,将已接收的视频数据流进行编码,形成视频文件。响应于用户对拍摄控件202的触摸操作,拍摄控件202还可向HAL层通知结束录制视频,HAL层中图像处理模块可停止向编码模块发送视频数据流。
下面介绍本申请实施例中,编码模块进行编码的具体过程。
本申请实施例中,编码模块可为多张图片按照顺序设置时间戳,使得这多张图片所组成的视频文件可按照设定的播放帧率(例如每秒20张图片)放映。本申请实施例中,设定播放帧率即相当于设定视频文件被播放时的帧间隔时间。
具体的,在录像方式下,电子设备对多张图片进行抽帧得到抽帧图片,抽帧图片通过设定的第一帧间隔时间编码得到视频文件,得到的视频文件被播放时的帧间隔时间为第一帧间隔时间。该第一帧间隔时间小于或等于第一时间间隔,即小于多张图片被采集的帧间隔时间。
在拍照方式下,电子设备通过设定的第二帧间隔时间编码得到视频文件,得到的视频文件被播放时的帧间隔时间为第二帧间隔时间。该第二帧间隔时间小于第二时间间隔,即小于多张图片被采集的帧间隔时间。
例如,设定时间戳的单位为1/8000秒,即1秒钟对应8000个时间戳单位。沿用前例,设定的播放帧率为每秒20张图片。那么编码模块可得到相邻两张图片之间的时间差(即播放时的帧间隔时间)对应的时间戳增量为8000/20=400时间戳单位,即两张图片之间间隔1/20秒。编码模块可对顺序接收到的图片按照时间戳单位进行时间戳的赋值。具体的,编码模块接收到第一张图片,设置其时间戳为0;接收到第二张图片,设置其时间戳为400时间戳单位,以此类推得到视频文件。视频文件中每张图片都对应有时间戳。
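上述时间戳赋值的计算,可用如下示意性的Python草图表示(说明性示例,函数名为假设):

```python
TIMESTAMP_UNIT = 8000        # 时间戳单位:1秒对应8000个单位(即单位为1/8000秒)

def assign_timestamps(num_frames: int, playback_fps: int) -> list:
    """按设定的播放帧率为顺序接收的图片依次赋时间戳:
    相邻两张图片的时间戳增量 = 8000 / 播放帧率。"""
    increment = TIMESTAMP_UNIT // playback_fps   # 例如 8000/20 = 400
    return [i * increment for i in range(num_frames)]
```

例如,播放帧率为每秒20张图片时,前三张图片的时间戳依次为0、400、800个时间戳单位。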
当该视频文件被放映时,按照时间戳由小到大将视频文件中的图片依次进行显示。即电子设备首先显示时间戳为0的图片,1/20秒之后显示时间戳为400的图片,依次类推,即实现了视频文件的放映。
下面分别针对拍照方式和录像方式介绍形成视频文件及预览过程。本申请实施例中,在暗光场景下,拍摄控制模块确定拍摄方式为拍照方式;在普通光场景和逆光场景下,拍摄控制模块确定拍摄方式为录像方式。
请参阅图9,图9是本申请实施例提供的一种视频文件采集及预览过程的流程示意图。该视频文件及预览过程是拍照方式对应的流程,是在图3所描述实施例中步骤S118之后执行的,具体可以是在步骤S120之后执行的。例如在识别得到拍摄场景为暗光场景时,电子设备采用该拍照方式进行视频文件采集及预览。
其中,拍摄控制模块存储有单帧时间间隔,图像采集模块存储有曝光时间。单帧时间间隔和曝光时间的具体描述可参考图3所描述示例中步骤S112中的具体描述。其中,图3所描述示例中,步骤S121可具体展开为图9所示示例中S121a(包含S201~S207)和S121b(包含S210),步骤S122可包含S208,步骤S123可包含S207,步骤S124可包含S209。
S201、拍摄控制模块向图像采集模块发送拍摄请求(video request)。
其中,拍摄请求用于请求采集一张图片,采集的图片的曝光时间在HAL层中生成并存储。
S202、拍摄控制模块中通过定时器计时抽帧间隔。
其中,抽帧间隔可以是拍摄控制模块或者场景识别模块根据曝光时间确定的,曝光时间可以是场景识别模块根据曝光值确定的,具体可参考图3所描述示例中步骤S112的描述。
相机应用可设置每张图片的抽帧间隔,在抽帧间隔内在HAL层完成图片的采集和处理。
其中,抽帧间隔可以是用户在相机应用界面上设置的,具体可参考图7和图8所描述示例。
S203、图像采集模块按照拍摄控制模块确定的曝光时间进行曝光,以采集图片。
曝光时间可以是场景识别模块根据曝光值确定、并由图像采集模块从场景识别模块获取的,具体可参考图3所描述示例中步骤S112的描述。
S204、图像采集模块将采集的图片发送给图像处理模块。
S205、图像采集模块向拍摄控制模块发送已采集一帧图片的通知。
在图像采集模块执行步骤S203之后,图像采集模块可向拍摄控制模块发送已采集一帧图片的通知。拍摄控制模块在接收到该已采集一帧图片的通知时,才会在定时结束后下发下一帧图片的拍摄请求,具体参考步骤S210的描述。
本申请实施例中,步骤S205还可以是在步骤S204之前执行的。
S206、图像处理模块使用暗光场景的视频后处理算法进行处理。
具体的,关于视频后处理算法的描述可参考概念部分的描述。
S207、图像处理模块将处理后的图片发送给编码模块,并发送给预览显示模块。
编码模块可接收多张图片,然后进行编码。
S208、编码模块对图片进行编码。
本申请实施例中,步骤S208不限于在步骤S209之前执行,还可以是在步骤S209之后执行,本申请实施例对此不作限定。
S209、预览显示模块根据图片预览显示。
其中,预览显示模块可在抽帧间隔期间内均显示该图片,直到预览显示模块接收到下一帧图片。
S210、拍摄控制模块检测到在定时时间内接收到通知,则向图像采集模块发送下一帧图片的拍摄请求。
如果定时时间结束,拍摄控制模块仍未接收到已采集一帧图片的通知,拍摄控制模块需等到接收到该已采集一帧图片的通知时,才执行步骤S210。在一种可能的实现方式中,拍摄控制模块在等待超过设定时间(该设定时间大于定时时间)仍未接收到该已采集一帧图片的通知时,发送下一帧图像的拍摄请求。
HAL层根据接收到的下一帧图片的拍摄请求执行采集下一帧图片并进行预览的过程,具体可参考步骤S201~S210。
循环执行步骤S201~S210直至用户触摸拍摄控件结束录制,从而可得到多张图片。这多张图片可用于顺序排列进行编码得到视频文件。
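步骤S201~S210中"定时到期且收到已采集一帧图片的通知后才下发下一帧拍摄请求"的时序,可用如下简化的Python模型示意(说明性示例,并非专利原文实现,输入输出均为假设的简化抽象):

```python
def request_dispatch_times(frame_done_times, timer_interval_s):
    """计算拍照方式下各帧拍摄请求的下发时刻:
    下一次请求时刻 = max(定时器到期时刻, 收到已采集一帧图片通知的时刻)。
    frame_done_times: 每帧'已采集'通知到达的时刻列表(秒);
    timer_interval_s: 定时器计时的抽帧间隔(秒)。"""
    dispatch = [0.0]                              # 第一帧请求在 0 时刻下发
    for done in frame_done_times:
        timer_expiry = dispatch[-1] + timer_interval_s
        dispatch.append(max(timer_expiry, done))  # 两个条件都满足才下发
    return dispatch
```

例如,抽帧间隔为1秒时,若某帧在0.4秒即采集完成,则仍等到定时器1秒到期才下发下一帧请求;若某帧到2.5秒才采集完成,则在2.5秒收到通知后立即下发。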
请参阅图10,图10是本申请实施例提供的一种视频文件采集及预览过程的流程示意图。该视频文件及预览过程是录像方式对应的流程,是在图3所描述实施例中步骤S118之后执行的,可以是在步骤S120之后执行的。例如在识别得到拍摄场景为普通光场景或者逆光场景时,电子设备采用该录像方式进行视频文件采集及预览。
其中,图3所描述示例中,步骤S121可具体展开为图10所示示例中S121c(包含S301~S302)和S121d(包含S305~S308),步骤S122可包含S309,步骤S123可包含S303,步骤S124可包含S304。
S301、拍摄控制模块向图像采集模块发送拍摄请求。
其中,拍摄请求用于请求采集视频,采集视频的帧率可以是预设的,例如按照标准的帧率,即每秒钟采集24张图片。
S302、图像采集模块按照预设的帧率录像,得到视频。
在本申请一些实施例中,图像采集模块每采集视频中的一帧图片,即向拍摄控制模块上报已采集一帧图片的通知。
S303、图像采集模块将视频发送给预览显示模块。
S304、预览显示模块根据视频预览显示。
其中,图像采集模块可以是在采集完成一张图片之后,即将该图片发送给预览显示模块进行预览显示,以实现实时预览显示。
S305、图像采集模块对视频抽帧。
可选的,图像采集模块还可以是将视频发送给图像处理模块,图像处理模块对视频按照预设的抽帧间隔进行抽帧。
其中,抽帧间隔可以是用户在相机应用界面上设置的,具体可参考图7和图8所描述示例。
S306、图像采集模块将抽帧后的视频发送给图像处理模块。
S307、图像处理模块使用视频后处理算法对抽帧后的视频进行处理。
本申请实施例中,如果场景识别模块识别到拍摄场景为普通光场景,则使用普通光场景对应的视频后处理算法进行处理。如果场景识别模块识别到拍摄场景为逆光场景,则使用逆光场景对应的视频后处理算法进行处理,例如使用HDR算法。
S308、图像处理模块将处理后的视频发送给编码模块。
S309、编码模块对视频进行编码。
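录像方式下按预设抽帧间隔对视频帧序列抽帧(步骤S305)的过程,可用如下示意性的Python草图表示(说明性示例,函数名为假设):

```python
def extract_frames(frames: list, capture_fps: int, interval_s: float) -> list:
    """按抽帧间隔从按标准帧率采集的帧序列中抽帧:
    每隔 round(抽帧间隔 × 采集帧率) 帧保留一帧。"""
    stride = max(1, round(interval_s * capture_fps))
    return frames[::stride]
```

例如,采集帧率为每秒24张图片、抽帧间隔为1秒时,每24帧保留一帧。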
本申请实施例中,电子设备可根据采集的图片识别到拍摄场景从第一拍摄场景变为第二拍摄场景。电子设备可根据第二拍摄场景,确定第二拍摄参数,还可以确定第二拍摄方式,然后根据第二拍摄参数和/或第二拍摄方式采集多张图片,并将多张图片进行编码得到视频文件。具体的,在步骤S119~S124进行视频录制的过程中,或者在启动相机应用后、未触摸拍摄控件(即未开始拍摄视频)之前,场景识别模块还可以周期性地进行场景识别。例如,经过步骤S101~S124,图像采集模块根据拍摄参数和拍摄方式采集图片或者视频,以形成视频数据流。在未停止本次视频录制之前(即用户未触摸拍摄控件202以切换为未开始拍摄状态之前),场景识别模块可以周期性地进行场景识别,例如每隔10分钟进行一次场景识别。如果场景识别模块检测到拍摄场景发生变化,则向拍摄控制模块上报已变化的拍摄场景。电子设备100重新执行类似步骤S104~S118对应的拍摄参数和拍摄方式调整过程。
例如,第一拍摄场景为暗光场景,对应的第一拍摄参数为暗光场景的拍摄参数,第一拍摄方式为暗光的拍摄方式(拍照方式)。第二拍摄场景为普通光场景,对应的第二拍摄参数为普通光场景的拍摄参数,第二拍摄方式为普通光的拍摄方式(录像方式)。则经过步骤S101~S124,图像采集模块以暗光场景对应的拍摄参数和拍摄方式采集多张图片以形成视频数据流。在步骤S121之后,场景识别模块识别到拍摄场景从暗光场景变化为普通光场景时,场景识别模块向拍摄控制模块上报普通光场景。电子设备100重新执行类似步骤S104~S118对应的拍摄参数和拍摄方式调整过程,以将拍摄参数调整为普通光场景对应的拍摄参数,将拍摄方式调整为普通光场景对应的拍摄方式。另外,图像处理模块根据普通光场景调整视频后处理算法,以得到视频数据流。
上述的视频采集方法中,在采集过程中如果检测到拍摄场景发生变化,可以重新调整拍摄参数和拍摄方式,以提高所采集图片的质量,从而提高所采集视频的质量。
可以理解的,本申请实施例以延时摄影模式下进行视频拍摄为例进行介绍,但是本申请实施例不限于延时摄影模式,还可以是在其他录像模式下使用上述视频采集方法,本申请实施例对此不作限定。另外,本申请实施例所提供的识别拍摄场景以调整拍摄参数,也可以应用在各种拍照模式下,本申请实施例对此不作限定。
在上述实施例中,全部或部分功能可以通过软件、硬件、或者软件加硬件的组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。

Claims (20)

  1. 一种视频采集方法,其特征在于,所述方法包括:
    电子设备显示相机应用界面,其中,所述相机应用界面上包括延时摄影模式图标;
    响应于作用在所述延时摄影模式图标的第一用户操作,所述电子设备采集至少一个图片;
    所述电子设备根据所述至少一个图片识别得到第一拍摄场景,所述第一拍摄场景包括逆光场景、普通光场景或者暗光场景;
    所述电子设备根据所述第一拍摄场景确定第一拍摄参数,所述第一拍摄参数与曝光量相关;
    所述电子设备根据所述第一拍摄参数采集多张图片;将所述多张图片进行编码得到视频文件,其中,所述视频文件被播放时的帧间隔时间小于或等于所述多张图片被采集的帧间隔时间。
  2. 根据权利要求1所述的方法,其特征在于,所述电子设备根据所述至少一个图片识别得到第一拍摄场景之后,所述方法还包括:
    所述电子设备根据所述第一拍摄场景确定第一拍摄方式,所述第一拍摄方式包括录像方式或者拍照方式;
    所述电子设备根据所述第一拍摄参数采集多张图片,包括:
    所述电子设备根据所述第一拍摄参数和所述第一拍摄方式采集多张图片。
  3. 根据权利要求2所述的方法,其特征在于,所述第一拍摄方式为所述录像方式时,所述多张图片被采集的帧间隔时间为第一时间间隔;
    所述电子设备将所述多张图片进行编码得到视频文件,包括:
    所述电子设备从所述多张图片中抽取图片得到抽帧图片,所述抽帧图片通过设定的第一帧间隔时间编码得到视频文件,所述视频文件的第一帧间隔时间小于或等于所述第一时间间隔。
  4. 根据权利要求3所述的方法,其特征在于,所述第一拍摄参数包括曝光时间,所述第一拍摄方式为所述拍照方式时,所述多张图片被采集的帧间隔时间为第二时间间隔;所述第二时间间隔大于所述第一时间间隔,且所述第二时间间隔由所述曝光时间确定;
    所述电子设备将所述多张图片进行编码得到视频文件,包括:
    所述电子设备通过设定的第二帧间隔时间编码得到视频文件,所述第二帧间隔时间小于所述第二时间间隔。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    所述电子设备在延时摄影的界面上显示第一控件,所述第一控件用于在大于或等于所述曝光时间的取值范围内调节所述第二时间间隔,所述第一拍摄参数包括所述第二时间间隔。
  6. 根据权利要求2至5任一项所述的方法,其特征在于,所述第一拍摄场景包括所述逆光场景或所述普通光场景时,所述第一拍摄方式为所述录像方式;
    所述第一拍摄场景包括所述暗光场景时,所述第一拍摄方式为所述拍照方式。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述电子设备采集至少一个图片并根据所述至少一个图片识别得到第一拍摄场景之后,所述方法还包括:
    所述电子设备根据所述第一拍摄场景,确定第一视频后处理算法,所述第一视频后处理算法与所述第一拍摄场景对应;
    所述电子设备将所述多张图片进行编码得到视频文件之前,所述方法还包括:
    所述电子设备使用所述第一视频后处理算法对所述多张图片进行处理得到处理后的多张图片;
    所述电子设备将所述多张图片进行编码得到视频文件,包括:
    所述电子设备将所述处理后的多张图片进行编码得到视频文件。
  8. 根据权利要求1至7任一项所述的方法,其特征在于,所述相机应用界面还包含拍摄控件,所述电子设备根据所述第一拍摄参数采集多张图片,并将所述多张图片进行编码得到视频文件,包括:
    响应于作用在所述拍摄控件上的第二用户操作,所述电子设备根据所述第一拍摄参数采集多张图片,并将所述多张图片进行编码得到视频文件。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,所述电子设备采集至少一个图片并根据所述至少一个图片识别得到第一拍摄场景之后,所述方法还包括:
    所述电子设备根据采集的图片识别到拍摄场景从所述第一拍摄场景变为第二拍摄场景;
    所述电子设备根据所述第二拍摄场景,确定第二拍摄参数;
    所述电子设备根据所述第二拍摄参数采集多张图片,并将所述多张图片进行编码得到所述视频文件。
  10. 一种电子设备,其特征在于,所述电子设备包括:一个或多个处理器、存储器和显示屏;
    所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器用于调用所述计算机指令以使得所述电子设备执行:
    显示相机应用界面,其中,所述相机应用界面上包括延时摄影模式图标;
    响应于作用在所述延时摄影模式图标的第一用户操作,采集至少一个图片;
    根据所述至少一个图片识别得到第一拍摄场景,所述第一拍摄场景包括逆光场景、普通光场景或者暗光场景;
    根据所述第一拍摄场景确定第一拍摄参数,所述第一拍摄参数与曝光量相关;
    根据所述第一拍摄参数采集多张图片;将所述多张图片进行编码得到视频文件,其中,所述视频文件被播放时的帧间隔时间小于或等于所述多张图片被采集的帧间隔时间。
  11. 根据权利要求10所述的电子设备,其特征在于,所述一个或多个处理器,还用于调用所述计算机指令以使得所述电子设备执行:
    根据所述第一拍摄场景确定第一拍摄方式,所述第一拍摄方式包括录像方式或者拍照方式;
    所述一个或多个处理器,具体用于调用所述计算机指令以使得所述电子设备执行:
    根据所述第一拍摄参数和所述第一拍摄方式采集多张图片。
  12. 根据权利要求11所述的电子设备,其特征在于,所述第一拍摄方式为所述录像方式时,所述多张图片被采集的帧间隔时间为第一时间间隔;
    所述一个或多个处理器,具体用于调用所述计算机指令以使得所述电子设备执行:
    从所述多张图片中抽取图片得到抽帧图片,所述抽帧图片通过设定的第一帧间隔时间编码得到视频文件,所述视频文件的第一帧间隔时间小于或等于所述第一时间间隔。
  13. 根据权利要求12所述的电子设备,其特征在于,所述第一拍摄参数包括曝光时间,所述第一拍摄方式为所述拍照方式时,所述多张图片被采集的帧间隔时间为第二时间间隔;所述第二时间间隔大于所述第一时间间隔,且所述第二时间间隔由所述曝光时间确定;
    所述一个或多个处理器,具体用于调用所述计算机指令以使得所述电子设备执行:
    通过设定的第二帧间隔时间编码得到视频文件,所述第二帧间隔时间小于所述第二时间间隔。
  14. 根据权利要求13所述的电子设备,其特征在于,所述一个或多个处理器,还用于调用所述计算机指令以使得所述电子设备执行:
    在延时摄影的界面上显示第一控件,所述第一控件用于在大于或等于所述曝光时间的取值范围内调节所述第二时间间隔,所述第一拍摄参数包括所述第二时间间隔。
  15. 根据权利要求11至14任一项所述的电子设备,其特征在于,所述第一拍摄场景包括所述逆光场景或所述普通光场景时,所述第一拍摄方式为所述录像方式;
    所述第一拍摄场景包括所述暗光场景时,所述第一拍摄方式为所述拍照方式。
  16. 根据权利要求10至15任一项所述的电子设备,其特征在于,所述一个或多个处理器,还用于调用所述计算机指令以使得所述电子设备执行:
    根据所述第一拍摄场景,确定第一视频后处理算法,所述第一视频后处理算法与所述第一拍摄场景对应;
    使用所述第一视频后处理算法对所述多张图片进行处理得到处理后的多张图片;
    所述一个或多个处理器,具体用于调用所述计算机指令以使得所述电子设备执行:
    将所述处理后的多张图片进行编码,得到视频文件。
  17. 根据权利要求10至16任一项所述的电子设备,其特征在于,所述相机应用界面还包含拍摄控件,所述一个或多个处理器,具体用于调用所述计算机指令以使得所述电子设备执行:
    响应于作用在所述拍摄控件上的第二用户操作,根据所述第一拍摄参数采集多张图片,并将所述多张图片进行编码得到视频文件。
  18. 根据权利要求10至17任一项所述的电子设备,其特征在于,所述一个或多个处理器,还用于调用所述计算机指令以使得所述电子设备执行:
    根据采集的图片识别到拍摄场景从所述第一拍摄场景变为第二拍摄场景;
    根据所述第二拍摄场景,确定第二拍摄参数;
    根据所述第二拍摄参数采集多张图片,并将所述多张图片进行编码得到所述视频文件。
  19. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1至9中任一项所述的方法。
  20. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1至9中任一项所述的方法。
PCT/CN2020/115109 2019-09-18 2020-09-14 视频采集方法和电子设备 WO2021052292A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910883504.5 2019-09-18
CN201910883504.5A CN112532859B (zh) 2019-09-18 2019-09-18 视频采集方法和电子设备

Publications (1)

Publication Number Publication Date
WO2021052292A1 true WO2021052292A1 (zh) 2021-03-25

Family

ID=74884487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115109 WO2021052292A1 (zh) 2019-09-18 2020-09-14 视频采集方法和电子设备

Country Status (2)

Country Link
CN (1) CN112532859B (zh)
WO (1) WO2021052292A1 (zh)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542591A (zh) * 2021-06-02 2021-10-22 惠州Tcl移动通信有限公司 缩时摄影处理方法、装置、移动终端及存储介质
CN113781388A (zh) * 2021-07-20 2021-12-10 许继集团有限公司 基于图像增强的输电线路通道隐患图像识别方法及装置
CN114051095A (zh) * 2021-11-12 2022-02-15 苏州臻迪智能科技有限公司 视频流数据的远程处理方法以及拍摄***
CN115086567A (zh) * 2021-09-28 2022-09-20 荣耀终端有限公司 延时摄影方法和装置
CN115278078A (zh) * 2022-07-27 2022-11-01 深圳市天和荣科技有限公司 一种拍摄方法、终端及拍摄***
CN115484423A (zh) * 2021-06-16 2022-12-16 荣耀终端有限公司 一种转场特效添加方法及电子设备
WO2023010912A1 (zh) * 2021-07-31 2023-02-09 荣耀终端有限公司 一种图像处理方法及电子设备
WO2023045963A1 (zh) * 2021-09-23 2023-03-30 北京字跳网络技术有限公司 一种视频生成方法、装置、设备及存储介质
CN116055738A (zh) * 2022-05-30 2023-05-02 荣耀终端有限公司 视频压缩方法及电子设备
CN116055897A (zh) * 2022-08-25 2023-05-02 荣耀终端有限公司 拍照方法及其相关设备
CN116347224A (zh) * 2022-10-31 2023-06-27 荣耀终端有限公司 拍摄帧率控制方法、电子设备、芯片***及可读存储介质
CN116668580A (zh) * 2022-10-26 2023-08-29 荣耀终端有限公司 场景识别的方法、电子设备及可读存储介质
CN116668866A (zh) * 2022-11-21 2023-08-29 荣耀终端有限公司 一种图像处理方法和电子设备
CN116708753A (zh) * 2022-12-19 2023-09-05 荣耀终端有限公司 预览卡顿原因的确定方法、设备及存储介质
CN116723417A (zh) * 2022-02-28 2023-09-08 荣耀终端有限公司 一种图像处理方法和电子设备
CN117119291A (zh) * 2023-02-06 2023-11-24 荣耀终端有限公司 一种出图模式切换方法和电子设备
CN117135299A (zh) * 2023-04-27 2023-11-28 荣耀终端有限公司 视频录制方法和电子设备
CN117692762A (zh) * 2023-06-21 2024-03-12 荣耀终端有限公司 拍摄方法及电子设备
CN117714837A (zh) * 2023-08-31 2024-03-15 荣耀终端有限公司 一种相机参数配置方法及电子设备

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992859A (zh) * 2021-12-27 2022-01-28 云丁网络技术(北京)有限公司 一种画质提升方法和装置
CN113810596B (zh) * 2021-07-27 2023-01-31 荣耀终端有限公司 延时摄影方法和装置
CN113705584A (zh) * 2021-08-24 2021-11-26 上海名图软件有限公司 物品差异光变检测***、检测方法及其应用
CN116546313A (zh) * 2022-01-25 2023-08-04 华为技术有限公司 一种复原拍摄的方法及电子设备
CN116723382B (zh) * 2022-02-28 2024-05-03 荣耀终端有限公司 一种拍摄方法及相关设备
CN116701288A (zh) * 2022-02-28 2023-09-05 荣耀终端有限公司 流媒体特性架构、处理方法、电子设备及可读存储介质
CN114827342B (zh) * 2022-03-15 2023-06-06 荣耀终端有限公司 视频处理方法、电子设备及可读介质
CN115802144B (zh) * 2023-01-04 2023-09-05 荣耀终端有限公司 视频拍摄方法及相关设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104349060A (zh) * 2013-08-06 2015-02-11 卡西欧计算机株式会社 图像处理装置以及图像处理方法
JP2017134182A (ja) * 2016-01-26 2017-08-03 キヤノン株式会社 撮像装置、その制御方法とプログラム
US20180167545A1 (en) * 2016-12-13 2018-06-14 Canon Kabushiki Kaisha Image capturing apparatus, control method therefor, and storage medium
CN108270966A (zh) * 2017-12-27 2018-07-10 努比亚技术有限公司 一种调整补光亮度的方法、移动终端及存储介质
CN109743508A (zh) * 2019-01-08 2019-05-10 深圳市阿力为科技有限公司 一种延时摄影装置及方法
CN110012210A (zh) * 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 拍照方法、装置、存储介质及电子设备

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9247098B2 (en) * 2013-04-09 2016-01-26 Here Global B.V. Automatic time lapse capture
CN103841323A (zh) * 2014-02-20 2014-06-04 小米科技有限责任公司 配置拍摄参数的方法、装置和终端设备
CN105100632B (zh) * 2014-05-13 2018-07-27 北京展讯高科通信技术有限公司 成像设备自动曝光的调整方法及装置、成像设备
CN104079835A (zh) * 2014-07-02 2014-10-01 深圳市中兴移动通信有限公司 拍摄星云视频的方法和装置
US10771712B2 (en) * 2017-09-25 2020-09-08 Gopro, Inc. Optimized exposure temporal smoothing for time-lapse mode
CN110086985B (zh) * 2019-03-25 2021-03-30 华为技术有限公司 一种延时摄影的录制方法及电子设备

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542591A (zh) * 2021-06-02 2021-10-22 惠州Tcl移动通信有限公司 缩时摄影处理方法、装置、移动终端及存储介质
CN115484423A (zh) * 2021-06-16 2022-12-16 荣耀终端有限公司 一种转场特效添加方法及电子设备
CN113781388A (zh) * 2021-07-20 2021-12-10 许继集团有限公司 基于图像增强的输电线路通道隐患图像识别方法及装置
WO2023010912A1 (zh) * 2021-07-31 2023-02-09 荣耀终端有限公司 一种图像处理方法及电子设备
WO2023045963A1 (zh) * 2021-09-23 2023-03-30 北京字跳网络技术有限公司 一种视频生成方法、装置、设备及存储介质
CN115086567A (zh) * 2021-09-28 2022-09-20 荣耀终端有限公司 延时摄影方法和装置
CN114051095A (zh) * 2021-11-12 2022-02-15 苏州臻迪智能科技有限公司 视频流数据的远程处理方法以及拍摄***
CN116723417A (zh) * 2022-02-28 2023-09-08 荣耀终端有限公司 一种图像处理方法和电子设备
CN116723417B (zh) * 2022-02-28 2024-04-26 荣耀终端有限公司 一种图像处理方法和电子设备
CN116055738A (zh) * 2022-05-30 2023-05-02 荣耀终端有限公司 视频压缩方法及电子设备
CN116055738B (zh) * 2022-05-30 2023-10-20 荣耀终端有限公司 视频压缩方法及电子设备
CN115278078A (zh) * 2022-07-27 2022-11-01 深圳市天和荣科技有限公司 一种拍摄方法、终端及拍摄***
CN116055897A (zh) * 2022-08-25 2023-05-02 荣耀终端有限公司 拍照方法及其相关设备
CN116055897B (zh) * 2022-08-25 2024-02-27 荣耀终端有限公司 拍照方法及其相关设备
CN116668580A (zh) * 2022-10-26 2023-08-29 荣耀终端有限公司 场景识别的方法、电子设备及可读存储介质
CN116668580B (zh) * 2022-10-26 2024-04-19 荣耀终端有限公司 场景识别的方法、电子设备及可读存储介质
CN116347224A (zh) * 2022-10-31 2023-06-27 荣耀终端有限公司 拍摄帧率控制方法、电子设备、芯片***及可读存储介质
CN116347224B (zh) * 2022-10-31 2023-11-21 荣耀终端有限公司 拍摄帧率控制方法、电子设备、芯片***及可读存储介质
CN116668866A (zh) * 2022-11-21 2023-08-29 荣耀终端有限公司 一种图像处理方法和电子设备
CN116668866B (zh) * 2022-11-21 2024-04-19 荣耀终端有限公司 一种图像处理方法和电子设备
CN116708753B (zh) * 2022-12-19 2024-04-12 荣耀终端有限公司 预览卡顿原因的确定方法、设备及存储介质
CN116708753A (zh) * 2022-12-19 2023-09-05 荣耀终端有限公司 预览卡顿原因的确定方法、设备及存储介质
CN117119291A (zh) * 2023-02-06 2023-11-24 荣耀终端有限公司 一种出图模式切换方法和电子设备
CN117135299A (zh) * 2023-04-27 2023-11-28 荣耀终端有限公司 视频录制方法和电子设备
CN117692762A (zh) * 2023-06-21 2024-03-12 荣耀终端有限公司 拍摄方法及电子设备
CN117714837A (zh) * 2023-08-31 2024-03-15 荣耀终端有限公司 一种相机参数配置方法及电子设备

Also Published As

Publication number Publication date
CN112532859B (zh) 2022-05-31
CN112532859A (zh) 2021-03-19

Similar Documents

Publication Publication Date Title
WO2021052292A1 (zh) 视频采集方法和电子设备
WO2021052232A1 (zh) 一种延时摄影的拍摄方法及设备
WO2020168956A1 (zh) 一种拍摄月亮的方法和电子设备
CN110072070B (zh) 一种多路录像方法及设备、介质
CN112492193B (zh) 一种回调流的处理方法及设备
CN113475057A (zh) 一种录像帧率的控制方法及相关装置
CN113891009B (zh) 曝光调整方法及相关设备
WO2023273323A1 (zh) 一种对焦方法和电子设备
CN113630558B (zh) 一种摄像曝光方法及电子设备
WO2023241209A1 (zh) 桌面壁纸配置方法、装置、电子设备及可读存储介质
CN113572948B (zh) 视频处理方法和视频处理装置
CN113971076A (zh) 任务处理方法及相关装置
CN115567630A (zh) 一种电子设备的管理方法、电子设备及可读存储介质
CN114863494A (zh) 屏幕亮度的调整方法、装置和终端设备
CN112188094B (zh) 图像处理方法及装置、计算机可读介质及终端设备
CN113852755A (zh) 拍摄方法、设备、计算机可读存储介质及程序产品
WO2021052388A1 (zh) 一种视频通信方法及视频通信装置
CN113923372B (zh) 曝光调整方法及相关设备
WO2021017518A1 (zh) 电子设备及图像处理方法
CN112422814A (zh) 一种拍摄方法和电子设备
CN117119314B (zh) 一种图像处理方法及相关电子设备
CN113672454B (zh) 冻屏监控方法、电子设备及计算机可读存储介质
WO2023124178A1 (zh) 显示预览图像的方法、装置及可读存储介质
WO2023020420A1 (zh) 音量显示方法、电子设备及存储介质
US20240236504A9 (en) Point light source image detection method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20866079

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20866079

Country of ref document: EP

Kind code of ref document: A1