CN116582748A - Electronic device, method and device for processing image data - Google Patents


Publication number
CN116582748A
Authority
CN
China
Prior art keywords
camera
image data
image processing
algorithm
data collected
Prior art date
Legal status
Pending
Application number
CN202210111705.5A
Other languages
Chinese (zh)
Inventor
朱文波
李洪波
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210111705.5A
Publication of CN116582748A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/02: Constructional features of telephone sets
    • H04M 1/0202: Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M 1/026: Details of the structure or mounting of specific components
    • H04M 1/0264: Details of the structure or mounting of specific components for a camera module assembly

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An electronic device, a method, and an apparatus for processing image data are provided. The electronic device includes: a first camera and a second camera corresponding to different focal segments; a front-end image processing module configured to simultaneously receive image data collected by the first camera and by the second camera during switching between the first camera and the second camera, and to perform image processing on the image data collected by the first camera and/or the second camera using an image processing algorithm, where the image processing algorithm includes a first algorithm and at least part of the image data collected by the first camera and the second camera is not processed by the first algorithm; and a back-end image processing module configured to receive the image data collected by the first camera and the second camera from the front-end image processing module and perform spatial alignment processing on it. The electronic device reduces the amount of data processed by the front-end image processing module during camera switching, thereby reducing the bandwidth required by spatial alignment processing.

Description

Electronic device, method and device for processing image data
Technical Field
The present disclosure relates to the technical field of photography, and in particular to an electronic device and a method and an apparatus for processing image data.
Background
To meet users' shooting requirements, electronic equipment (such as a mobile phone) is usually provided with a plurality of cameras corresponding to different focal segments. When switching from one camera to another, a spatial alignment transform (SAT) technique may be used to achieve smooth zooming and prevent abrupt picture changes.
However, the spatial alignment technique requires processing of image data acquired by at least two cameras simultaneously. Therefore, the spatial alignment process places high demands on the bandwidth of the electronic device.
Disclosure of Invention
In view of this, the present disclosure provides an electronic device, a method and an apparatus for processing image data, where the electronic device may reduce the bandwidth requirement of spatial alignment processing in the process of switching cameras corresponding to different focal segments.
In a first aspect, an electronic device is provided, comprising: a first camera and a second camera corresponding to different focal segments; a front-end image processing module configured to simultaneously receive image data collected by the first camera and image data collected by the second camera during switching between the first camera and the second camera, and to perform image processing on the image data collected by the first camera and/or the image data collected by the second camera using an image processing algorithm, wherein the image processing algorithm comprises a first algorithm and at least part of the image data collected by the first camera and the image data collected by the second camera is not processed by the first algorithm; and a back-end image processing module configured to receive the image data collected by the first camera and the image data collected by the second camera from the front-end image processing module, and to perform spatial alignment processing on the image data collected by the first camera and the image data collected by the second camera.
Optionally, in some embodiments, the first camera is the main camera, and the image data collected by the first camera is processed using all of the image processing algorithms.
Optionally, in some embodiments, the second camera is a secondary camera, and image data acquired by the second camera is not processed by the image processing algorithm.
Optionally, in some embodiments, that at least a portion of the image data is not processed by the first algorithm comprises: the algorithm module corresponding to the first algorithm is disabled for at least part of the image data; or the data flow corresponding to at least part of the image data bypasses the algorithm module corresponding to the first algorithm.
Optionally, in some embodiments, the back-end image processing module is an application processor, and the front-end image processing module is an image signal processor connected to the application processor.
Optionally, in some embodiments, the bandwidth configured for the front-end image processing module is insufficient to support image processing of both the image data acquired by the first camera and the image data acquired by the second camera using all of the image processing algorithms.
In a second aspect, a method of processing image data is provided, comprising: during switching between a first camera and a second camera, a front-end image processing module simultaneously receives image data collected by the first camera and image data collected by the second camera, and performs image processing on the image data collected by the first camera and/or the image data collected by the second camera using an image processing algorithm, wherein the image processing algorithm comprises a first algorithm and at least part of the image data collected by the first camera and the image data collected by the second camera is not processed by the first algorithm; and a back-end image processing module receives the image data collected by the first camera and the image data collected by the second camera from the front-end image processing module, and performs spatial alignment processing on the image data collected by the first camera and the image data collected by the second camera.
Optionally, in some embodiments, the first camera is the main camera, and the image data collected by the first camera is processed using all of the image processing algorithms.
Optionally, in some embodiments, the second camera is a secondary camera, and image data acquired by the second camera is not processed by the image processing algorithm.
Optionally, in some embodiments, that at least a portion of the image data is not processed by the first algorithm comprises: the algorithm module corresponding to the first algorithm is disabled for at least part of the image data; or the data flow corresponding to at least part of the image data bypasses the algorithm module corresponding to the first algorithm.
Optionally, in some embodiments, the back-end image processing module is an application processor, and the front-end image processing module is an image signal processor connected to the application processor.
Optionally, in some embodiments, the bandwidth configured for the front-end image processing module is insufficient to support image processing of both the image data acquired by the first camera and the image data acquired by the second camera using all of the image processing algorithms.
In a third aspect, there is provided an apparatus for processing image data, comprising a memory having executable code stored therein and a processor configured to execute the executable code to implement the method of the second aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon executable code which when executed is capable of carrying out the method of the second aspect.
In a fifth aspect, a computer program product is provided comprising executable code which, when executed, is capable of implementing the method according to the second aspect.
In the embodiments of the present application, during switching between cameras of the electronic device corresponding to different focal segments, the amount of image data from the two cameras that the front-end image processing module must process is reduced. The scheme provided by the present application therefore effectively reduces the bandwidth demand that spatial alignment processing places on the electronic device during camera switching.
Drawings
Fig. 1 is an operation screen of an electronic device camera provided in the related art.
Fig. 2 is a photograph taken using a tele lens of an electronic device in the related art.
Fig. 3 is a schematic structural view of an electronic device provided in the related art.
Fig. 4 is a schematic diagram of a spatial alignment process flow of the electronic device of fig. 3.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Fig. 7 is a schematic diagram of an image processing flow of the electronic device shown in fig. 6.
Fig. 8 is a flowchart of a method for processing image data according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present disclosure.
The electronic device can meet users' shooting requirements by providing a plurality of cameras. The electronic device may comprise a terminal device or a chip. The terminal device may include a smartphone, tablet, notebook computer, palmtop computer, personal digital assistant (PDA), portable media player (PMP), navigation device, wearable device, digital camera, video camera, and the like.
Cameras may also be referred to as lenses. An electronic device that provides multiple cameras may also be referred to as a multi-camera electronic device. The application does not limit the number and types of cameras that can be provided by the multi-camera electronic device.
Different types of cameras may correspond to different focal segments. A focal segment is a range into which camera focal lengths are divided. Different electronic devices may divide focal segments differently.
A multi-camera electronic device may typically provide 3 to 5 cameras corresponding to different focal segments. Among them, cameras that are often used in electronic devices include wide-angle cameras, ultra-wide-angle cameras, tele cameras, and the like.
Typically, the camera operation page of a multi-camera electronic device offers zoom magnification options such as 0.6x, 1x, 2x, 5x, and 10x. The zoom magnification may also be referred to as the zoom value. The smaller the zoom magnification, the wider the field of view; the larger the zoom magnification, the farther the camera can shoot. Here, 0.6x roughly corresponds to the focal segment of the electronic device's ultra-wide-angle camera, 1x roughly corresponds to the focal segment of its main camera, and 2x, 5x, or 10x roughly corresponds to the focal segment of its tele camera.
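As a rough illustration, the mapping from zoom magnification to camera described above can be sketched as a simple threshold function. The thresholds and camera names below are assumptions chosen for the example, not values taken from any specific device:

```python
def select_camera(zoom: float) -> str:
    """Map a zoom magnification to the camera whose focal segment covers it.

    Thresholds are illustrative; real devices tune them per hardware.
    """
    if zoom < 1.0:
        return "ultra_wide"  # e.g. 0.6x
    elif zoom < 5.0:
        return "wide"        # main camera, roughly 1x-2x and up
    else:
        return "tele"        # e.g. 5x, 10x

print(select_camera(0.6))   # ultra_wide
print(select_camera(1.0))   # wide
print(select_camera(10.0))  # tele
```

A production implementation would also carry hysteresis around each threshold so that small zoom jitters do not trigger repeated camera switches.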
Several cameras commonly used for electronic devices are briefly described below.
Wide-angle camera
The main camera (or primary camera) of most electronic devices is typically a wide-angle camera with an equivalent focal length of around 28 mm. Because this focal length is close to the human eye's natural field of view, it gives a "what you see is what you get" effect. The main camera is usually the most frequently used camera.
Fig. 1 is an operation screen diagram of an electronic device camera provided in the related art. As shown in fig. 1, 1x represents a 1x zoom magnification, which lies within the focal segment of the wide-angle camera.
Image quality is best when shooting at 1x zoom magnification. This magnification is suitable for shooting people, buildings, scenery, and the like.
Super wide angle camera
The ultra-wide angle camera can provide a wider field of view than the wide angle camera. There may be differences in zoom magnification displayed when using ultra-wide angle cameras on different electronic devices.
Compared with a wide-angle camera, an ultra-wide-angle camera can capture wider, more impactful pictures. It is therefore better suited to shooting landscapes and architecture.
The ultra-wide-angle camera has a very large framing range, so a wider picture can be obtained from the same position than with a wide-angle camera. Photos taken with an ultra-wide-angle camera therefore leave ample room for re-cropping later.
When shooting landscapes with an ultra-wide-angle camera, more scenery elements can be included in the frame, and with good composition the depth and spatial sense of the picture come through, making the whole landscape photo look grand and imposing.
In addition, ultra-wide-angle cameras inherently exhibit lens distortion: they stretch and enlarge objects at the edges of the photograph. When shooting upward, this deformation can be exploited to produce a "near-large, far-small" visual impact. When shooting a building, the lens distortion of the ultra-wide-angle camera makes the building appear more imposing.
Long-focus camera
For electronic devices, zoom magnifications above 1x may be referred to as telephoto. The larger the number in front of the "x", the farther the camera can shoot.
The zoom magnification at which cameras are switched differs between electronic devices. For example, some electronic devices switch from the wide-angle camera to the tele camera at a zoom magnification of 5x.
A tele camera can take higher-quality pictures from a distance. Unlike digital zoom, using a tele camera to capture distant objects or to enlarge the picture does not degrade image quality.
When it is inconvenient to move closer, for example when framing a view among cluttered buildings, it is difficult to isolate the subject using only the main camera (typically a wide-angle camera) of the electronic device. Shooting with the tele camera in this situation gives an otherwise flat, plain photo more layering.
Fig. 2 is a photograph taken using a tele lens of an electronic device in the related art. As can be seen from fig. 2, the tele camera can "compress" the distance between the background and the foreground, bringing a sense of spatial compression and making the whole picture look fuller. This "sense of compression" is one of the characteristics of a tele camera.
A tele camera produces little deformation and a weak perspective effect; it shortens the distance between the foreground and the background and strengthens the relationship between them. Shooting with a tele camera can therefore create unique visual effects.
Using this characteristic, straight receding scenery such as a road or a railing can serve as a leading line that directs the viewer's attention to the subject in the depth of the picture.
In summary, although a multi-camera electronic device cannot by itself guarantee better photos, each lens has its own characteristics, and together they offer users a diverse shooting experience.
A user may change the zoom magnification to zoom the picture in or out while shooting or previewing. Because different cameras suit different focal segments, the electronic device may need to switch cameras to complete shooting or previewing as the zoom magnification changes.
As can be seen from the foregoing, different cameras (e.g., main camera, tele camera, etc.) have different characteristics. When different cameras shoot the same picture, the shot pictures are different. For example, the size of the photographed picture may be different. As another example, the photographed picture may have a rotation relationship.
If cameras are switched by hard switching, the displayed picture shows an abrupt transition or stutter. Hard switching refers to stopping the original camera (e.g., a wide-angle camera), i.e., stopping its stream, and then starting the target camera (e.g., a tele camera).
How to keep lens switching smooth during zooming (e.g., avoid stream interruption) and reduce image jumps (e.g., keep the image effect consistent before and after switching) has therefore become a problem that current multi-camera electronic devices must consider when switching cameras.
To provide a better shooting experience, a spatial alignment technique may be used when the multi-camera electronic device switches between different cameras. Spatial alignment may also be referred to as smooth zooming, spatial alignment conversion, or SAT.
During switching between cameras corresponding to different focal segments, abrupt picture changes can be caused by factors such as misalignment of the cameras' optical axes and rotation of the sensors. Spatial alignment optimizes the picture displayed during the switch through a software algorithm, so that the picture transitions smoothly without abrupt changes.
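As an illustrative sketch (not the patented method), the smoothing idea behind spatial alignment can be approximated by cross-fading the outgoing camera's frame with the incoming camera's already-aligned frame over the switch window. The warp that compensates for optical-axis offset and sensor rotation is assumed to have happened already and is omitted here; frames are modeled as flat pixel lists:

```python
def sat_blend(frame_a, frame_b_aligned, progress):
    """Cross-fade two spatially aligned frames.

    progress runs from 0.0 (old camera only) to 1.0 (new camera only)
    over the switching window, so the picture changes gradually
    instead of jumping.
    """
    w = max(0.0, min(1.0, progress))  # clamp to [0, 1]
    return [(1 - w) * a + w * b for a, b in zip(frame_a, frame_b_aligned)]

# Halfway through the switch, each pixel is the average of both frames.
print(sat_blend([0, 0, 0], [10, 20, 30], 0.5))  # [5.0, 10.0, 15.0]
```

A real SAT pipeline would also match field of view, exposure, and color between the two streams before blending.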
Fig. 3 is a schematic structural view of an electronic device provided in the related art. As shown in fig. 3, the electronic device 300 includes three cameras and a flash 304. The three cameras are a telephoto camera 301, a main camera (wide-angle camera) 302, and an ultra-wide-angle camera 303, respectively.
The tele camera 301 may be an 8 MP periscope tele camera with a 125 mm equivalent focal length and an F3.4 aperture. The main camera 302 may be a 40 MP high-sensitivity camera with a 27 mm equivalent focal length and an F1.6 aperture. The ultra-wide-angle camera 303 may be a 20 MP ultra-wide-angle camera with a 16 mm equivalent focal length and an F2.2 aperture. The electronic device can zoom continuously over focal lengths between 16-27 mm and 27-125 mm.
Fig. 4 is a schematic diagram of a spatial alignment process flow of the electronic device of fig. 3. The black-marked camera in fig. 4 indicates the camera in use.
As shown in fig. 4, the electronic device shoots with the main camera 302 when the zoom magnification is between 1x and 2x. When the zoom magnification reaches 3x, the electronic device turns on the tele camera 301 and the main camera 302 simultaneously. When the zoom magnification reaches 5x, the camera used for shooting switches from the main camera 302 to the tele camera 301.
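The overlap window in fig. 4, where both cameras stream so spatial alignment can run, can be sketched as follows. The thresholds simply follow the zoom values named above and would differ on other hardware:

```python
def active_cameras(zoom):
    """Which cameras stream at a given zoom magnification, per fig. 4:
    main only below 3x, both during the 3x-5x alignment window,
    tele only from 5x upward."""
    if zoom < 3.0:
        return {"main"}
    if zoom < 5.0:
        return {"main", "tele"}  # both on, so spatial alignment can blend them
    return {"tele"}

print(sorted(active_cameras(1.0)))  # ['main']
print(sorted(active_cameras(3.0)))  # ['main', 'tele']
print(sorted(active_cameras(5.0)))  # ['tele']
```

It is exactly inside the 3x-5x window, where two full streams run at once, that the front-end module's bandwidth problem described below arises.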
The spatial alignment process flow shown in fig. 4 uses a fusion scheme of lens relay zooming in combination with software to realize continuous smooth zooming of the electronic device between 16mm and 125 mm.
In the process of camera switching, if multiple camera data streams can be received at a high frame rate and a high resolution simultaneously (e.g., image data of multiple cameras can be received simultaneously), and all of the image processing algorithms can be used to process the multiple data streams, then multiple real-time full-size high frame rate data streams can be utilized for spatial alignment processing. Under the condition, the smooth transition of the display effect can be ensured when the camera is switched.
However, processing image data acquired by multiple cameras simultaneously has a high requirement on bandwidth (e.g., DDR bandwidth). This is because algorithms that process image data may need to access DDR in processing the image data acquired by the camera. For example, some algorithm that processes image data may require reading data from a DDR. As another example, when the algorithmic processing is complete, it may be necessary to write the processed image data to the DDR.
For a processing module (e.g., a chip), the greater the bandwidth of the DDR, the greater the power consumption and the higher the cost. Therefore, for the front-end image processing module of the electronic device, bandwidth is generally used with caution due to the limitation of power consumption. Therefore, the bandwidth provided by the front-end image processing module may not be sufficient to process the data streams of both cameras simultaneously.
In addition, when the data volume of the data stream provided by the camera is very large (for example, shooting 4k or 8k high-definition video), the bandwidth provided by the front-end image processing module may not be enough to process the data streams of two paths of cameras at the same time.
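A back-of-the-envelope estimate shows why two simultaneous streams strain DDR bandwidth. All figures below are assumptions chosen for illustration (4K frames, 16 bits of storage per pixel, 30 fps, one DDR read plus one DDR write per frame per algorithm pass); real traffic depends on the sensor format and how many stages touch memory:

```python
width, height = 3840, 2160  # 4K frame (assumed)
bytes_per_pixel = 2         # e.g. 10/12-bit raw stored in 16 bits (assumed)
fps = 30
passes = 2                  # one DDR read + one DDR write per frame (assumed)

per_stream = width * height * bytes_per_pixel * fps * passes
print(f"one stream : {per_stream / 1e9:.2f} GB/s")      # roughly 1 GB/s
print(f"two streams: {2 * per_stream / 1e9:.2f} GB/s")  # roughly double
```

Every additional algorithm that reads and writes full frames multiplies this figure, which is why skipping algorithm stages during the switch window directly cuts the bandwidth demand.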
When the front-end image processing module cannot support the instantaneous high-bandwidth requirement of multiple data streams, the electronic device usually performs a direct hard cut when switching cameras. The hard cut causes the picture to break or change abruptly, degrading the user experience.
Therefore, how to reduce the bandwidth requirement of the space alignment process on the electronic device so as to realize smooth switching between cameras of the electronic device corresponding to different focal segments becomes a current urgent problem to be solved.
In view of this, the present application provides an electronic device. The electronic device provided by the application is described in detail below with reference to fig. 5 to 7.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device 500 may include a first camera 511, a second camera 512, a front-end image processing module 520, and a back-end image processing module 530. Each module is described separately below.
The first camera 511 and the second camera 512 may correspond to different focal segments. For example, the first camera 511 may correspond to the tele focal segment, i.e., it may be a tele camera, and the second camera 512 may correspond to the wide-angle focal segment, i.e., it may be a wide-angle camera or main camera. As another example, the first camera 511 may correspond to the wide-angle focal segment and the second camera 512 to the tele focal segment. As yet another example, the first camera 511 may correspond to the wide-angle focal segment and the second camera 512 to the ultra-wide-angle focal segment, i.e., it may be an ultra-wide-angle camera.
It should be understood that fig. 5 shows only two cameras; embodiments of the present application do not limit the specific types and number of cameras that electronic device 500 may include. The first camera 511 and the second camera 512 may be cameras of any type or focal length.
In some embodiments, a camera may acquire image data using an image sensor.
In some embodiments, images are output primarily by first camera 511 before and during the switching of cameras used by electronic device 500 from first camera 511 to second camera 512. At this time, the first camera 511 may be referred to as a main camera, and the second camera 512 may be referred to as a sub camera.
After the switching is completed, the electronic device 500 mainly outputs an image from the second camera 512, and at this time, the second camera 512 may be referred to as a main camera, and the first camera 511 may be referred to as a sub camera.
The front-end image processing module 520 may be configured to process image data acquired by the camera. The front-end image processing module 520 may refer to an image signal processor (image signal processor, ISP). The image signal processor may be integrated with the application processor (application processor, AP) or may be independent of the application processor. The image signal processor independent of the application processor may also be referred to as pre-ISP.
The front-end image processing module 520 may be configured to receive image data collected by the first camera 511 and image data collected by the second camera 512 simultaneously during the switching process of the first camera 511 and the second camera 512.
The front-end image processing module 520 may be used to perform image processing algorithms. The image processing algorithm may comprise a plurality of algorithms. The algorithm may include, for example, a denoising algorithm, AF focus parameter calculation, and the like. Each algorithm may be implemented by software or by a hardware module, and the present application is not limited to a specific algorithm included in the image processing algorithm and a specific implementation form of each algorithm.
The front-end image processing module 520 may perform image processing on the image data collected by the first camera 511 and/or the image data collected by the second camera 512 using an image processing algorithm.
In some embodiments, the front-end image processing module 520 may select an image processing algorithm for processing image data collected by the camera according to a setting of photographing software of the electronic device or scene information of the current photographing, etc.
In some embodiments, the image processing algorithm may include a first algorithm. When the front-end image processing module 520 performs image processing on the image data collected by the first camera 511 and the image data collected by the second camera 512, at least part of the image data collected by the first camera 511 and the image data collected by the second camera 512 are not processed by the first algorithm.
It should be understood that the first algorithm is used only to make the description of the present application more clear and is not intended to limit the specific algorithm used or the number of algorithms used by the image processing algorithm.
The front-end image processing module 520 may execute one or more image processing algorithms to perform image processing on the image data acquired by the first camera 511 and the image data acquired by the second camera 512. The first algorithm may refer to one or more of the image processing algorithms that process the image data collected by the first camera 511 and/or the image data collected by the second camera 512.
That at least some of the image data collected by the first camera 511 and the image data collected by the second camera 512 is not processed by the first algorithm means that the front-end image processing module 520 does not process all of this image data using all of the image processing algorithms.
In some embodiments, front-end image processing module 520 may process all of the image data acquired by first camera 511 and all of the image data acquired by second camera 512. During image processing of the entire image data, the front-end image processing module 520 may shut down some of the image processing algorithms.
For example, the image processing algorithms may include a first algorithm, a second algorithm, and a third algorithm. The front-end image processing module 520 may turn off the first algorithm and the second algorithm when processing the image data collected by the first camera 511 and the image data collected by the second camera 512. At this time, the image data collected by the first camera 511 and the image data collected by the second camera 512 are processed only by the third algorithm.
As another example, the image processing algorithms may include a first algorithm, a second algorithm, and a third algorithm. The front-end image processing module 520 may turn off all algorithms while processing the image data collected by the first camera 511 and the image data collected by the second camera 512. That is, the image data collected by the first camera 511 and the image data collected by the second camera 512 may not be processed by any image processing algorithm. The method of directly closing all algorithms is simple to realize, and the efficiency of subsequent space alignment processing can be improved.
For another example, when the front-end image processing module 520 processes the image data collected by the first camera 511 and the image data collected by the second camera 512, and the first camera 511 is the main camera (i.e., the displayed picture mainly comes from the image data collected by the first camera 511), all algorithms in the image processing algorithm can be applied to the image data collected by the first camera 511, so as to ensure the display effect of the picture during spatial alignment processing.
For another example, when the front-end image processing module 520 processes the image data collected by the first camera 511 and the image data collected by the second camera 512, and the second camera 512 is the secondary camera, all algorithms applied to the image data collected by the second camera 512 can be disabled, which makes the switching process simpler to control.
For another example, when the front-end image processing module 520 processes the image data collected by the first camera 511 and the image data collected by the second camera 512, the two modes above can be combined, which is simple to implement while the display effect of the image does not change noticeably.
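The "disable some or all algorithms during the switch" modes above can be sketched as a stage chain with per-stage enable flags. The stage names (denoise, sharpen, tone_map) are hypothetical examples, not algorithms named by the patent:

```python
class FrontEndPipeline:
    """Minimal sketch of a front-end stage chain whose per-algorithm
    modules can be switched off during a camera switch."""

    def __init__(self):
        # stage name -> [enabled flag, processing function]
        self.stages = {
            "denoise":  [True, lambda d: d + "|denoised"],
            "sharpen":  [True, lambda d: d + "|sharpened"],
            "tone_map": [True, lambda d: d + "|tonemapped"],
        }

    def set_enabled(self, name, enabled):
        self.stages[name][0] = enabled

    def process(self, data):
        for enabled, fn in self.stages.values():
            if enabled:          # disabled stages are skipped entirely,
                data = fn(data)  # so they generate no memory traffic
        return data

pipe = FrontEndPipeline()
pipe.set_enabled("denoise", False)  # "first algorithm" closed during switch
pipe.set_enabled("sharpen", False)
print(pipe.process("raw"))  # raw|tonemapped
```

The same structure covers both described modes: disabling every stage gives the "close all algorithms" case, while disabling only stages on the secondary camera's stream gives the per-camera case.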
In other embodiments, front-end image processing module 520 may process all of the image data acquired by first camera 511 and all of the image data acquired by second camera 512. In doing so, the front-end image processing module 520 may bypass some of the image processing algorithms.
For example, the image processing algorithm may include a first algorithm and a second algorithm. When processing the image data acquired by the first camera 511 and the image data acquired by the second camera 512, the front-end image processing module 520 may select the processing flow of the image data within the front-end image processing module 520 so that the image data skips some of the algorithms. For example, the image data may be left unprocessed by the first algorithm and/or the second algorithm.
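As a rough sketch of the bypass behaviour described above (an illustrative assumption, not the patent's actual interface), a front-end pipeline can route a frame only through the stages that are currently enabled:

```python
# Illustrative sketch: a front-end pipeline whose stages can be
# individually bypassed, so that image data skips selected algorithms.
# Class and stage names are assumptions, not the patent's interface.
class FrontEndPipeline:
    def __init__(self, stages):
        self.stages = list(stages)   # ordered (name, function) pairs
        self.bypassed = set()

    def bypass(self, name):
        self.bypassed.add(name)

    def enable(self, name):
        self.bypassed.discard(name)

    def process(self, frame):
        # Route the frame only through stages that are not bypassed.
        for name, fn in self.stages:
            if name not in self.bypassed:
                frame = fn(frame)
        return frame

# During switching, skip the first and second algorithms entirely.
pipeline = FrontEndPipeline([
    ("first_algorithm", lambda f: f + ["A1"]),
    ("second_algorithm", lambda f: f + ["A2"]),
])
pipeline.bypass("first_algorithm")
pipeline.bypass("second_algorithm")
assert pipeline.process([]) == []   # no algorithm touched the frame
```

Re-enabling a stage with `enable` restores the original flow, which matches the idea of returning to full processing once the switch completes.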
In other embodiments, front-end image processing module 520 may process portions of the image data acquired by first camera 511 and/or portions of the image data acquired by second camera 512 using an image processing algorithm.
For example, the front-end image processing module 520 may process all of the image data acquired by the first camera 511 using an image processing algorithm, while processing only a portion of the image data acquired by the second camera 512.
As another example, front-end image processing module 520 may process all of the image data acquired by first camera 511 using an image processing algorithm without processing the image data acquired by second camera 512.
As another example, the front-end image processing module 520 may process part of the image data acquired by the first camera 511 and part of the image data acquired by the second camera 512 using an image processing algorithm. For example, the image data collected by the first camera 511 and the image data collected by the second camera 512 each include portrait data. The front-end image processing module 520 may process only the portrait data included in the image data collected by the first camera 511 and the second camera 512 using an image processing algorithm, so as to ensure the photographing effect of the portrait.
The front-end image processing module 520 may select a processing mode for the image data collected by the first camera 511 and the image data collected by the second camera 512 according to any combination of the above modes.
The present application does not limit the specific manner in which the front-end image processing module 520 processes the image data collected by the first camera 511 and the image data collected by the second camera 512. Any processing manner in which at least some of the image data collected by the first camera 511 and the second camera 512 is not processed by all of the image processing algorithms falls within the scope of the present application.
In actual processing, the image processing rule for the image data collected by the first camera 511 and the image data collected by the second camera 512 may be selected according to information about the shooting scene (for example, lighting conditions), the shooting mode (for example, portrait mode, landscape mode, etc.), the bandwidth provided by the electronic device 500, the features of different algorithms, and the like.
For example, when the ambient light is relatively bright, the effect of noise reduction (NR) may not be obvious, and noise reduction processing of the image data may be skipped. This reduces the bandwidth requirement of the image processing process without significantly reducing the image display effect.
As another example, for some algorithms that do not change the data content (e.g., calculation of AF focus parameters), only a single stream of image data may need to be processed.
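The rule selection described above can be sketched as a small policy function; the lux threshold and algorithm names below are illustrative assumptions, not values from the patent:

```python
def select_algorithms(lux, dual_stream):
    """Choose which front-end algorithms to run for a frame.
    The threshold and algorithm names are illustrative assumptions."""
    enabled = {"noise_reduction", "white_balance", "af_statistics"}
    if lux > 500:            # bright scene: NR gain is marginal
        enabled.discard("noise_reduction")
    if dual_stream:          # two live streams: shed load further
        enabled.discard("noise_reduction")
    return enabled

# Bright scene during a camera switch: NR is skipped to save bandwidth.
assert "noise_reduction" not in select_algorithms(lux=800, dual_stream=True)
# Dim scene, single stream: the full algorithm set runs.
assert "noise_reduction" in select_algorithms(lux=100, dual_stream=False)
```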
In some embodiments, the same processing strategy may be used for the image data acquired by the first camera 511 and the second camera 512 to reduce the amount of data as a whole. Meanwhile, under the condition of limited bandwidth, the data volume provided by the two cameras is enough to support the processing of a spatial alignment algorithm.
In some embodiments, the configuration of the bandwidth of front-end image processing module 520 may not support image processing of both the image data acquired by first camera 511 and the image data acquired by second camera 512 using all of the image processing algorithms.
At this time, by making at least part of the image data acquired by the first camera 511 and the image data acquired by the second camera 512 not processed by a part of the image processing algorithms (for example, the first algorithm), the bandwidth requirement of the front-end image processing module 520 for the spatial alignment process can be reduced. Therefore, the scheme provided by the application can be suitable for the scene with limited bandwidth.
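A back-of-the-envelope estimate (with example resolutions and pixel sizes, not values from the patent) shows why two simultaneous streams strain a fixed bandwidth budget:

```python
def stream_bandwidth_mb_s(width, height, bytes_per_pixel, fps):
    """Rough per-stream bandwidth estimate in MB/s (illustrative)."""
    return width * height * bytes_per_pixel * fps / 1e6

# One 12 MP stream at 30 fps with 2 bytes per pixel:
single = stream_bandwidth_mb_s(4000, 3000, 2, 30)   # 720.0 MB/s
# During switching, two such streams double the front-end load:
dual = 2 * single                                   # 1440.0 MB/s
assert dual == 1440.0
```

If the front-end budget was sized for one fully processed stream, shedding part of the per-stream algorithm work is one way to fit two streams into the same budget.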
The electronic device 500 may also include a back-end image processing module 530. The back-end image processing module 530 may be configured to further process the image data acquired by the camera. The application is not limited to a specific implementation of the back-end image processing module 530.
In some embodiments, the back-end image processing module 530 may refer to an application processor.
In some embodiments, the front-end image processing module 520 may be one of the back-end image processing modules 530. In other embodiments, the front-end image processing module 520 may be a module that is independent of the back-end image processing module 530.
The front-end image processing module 520 may interact with the back-end image processing module 530 in a variety of ways to complete the data transfer. For example, the front-end image processing module 520 may interact with the back-end image processing module 530 through a data interface. The data interface may refer to, for example, a mobile industry processor interface (mobile industry processor interface, MIPI).
The back-end image processing module 530 may be configured to receive image data acquired by the first camera 511 and image data acquired by the second camera 512 from the front-end image processing module 520.
The image data collected by the first camera 511 and the image data collected by the second camera 512 received by the back-end image processing module 530 may refer to image data processed by an image processing algorithm or data not processed by the image processing algorithm.
For example, when the front-end image processing module 520 selects to directly shut down all of the image processing algorithms for processing the image data, the image data received by the back-end image processing module 530 is image data that has not been processed by the image processing algorithms.
The back-end image processing module 530 may perform spatial alignment processing on the image data acquired by the first camera 511 and the image data acquired by the second camera 512 after receiving the image data from the front-end image processing module 520.
The spatial alignment process may make the image data collected by the first camera 511 and the image data collected by the second camera 512 more similar in display effect, so as to implement smooth zooming during the camera switching process, and prevent picture interruption or abrupt change.
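As a minimal sketch of why aligned streams enable smooth zooming, a crossfade between two already-aligned frames avoids an abrupt picture change; the real spatial alignment step, which warps one frame into the other's geometry, is omitted here as an assumption-laden simplification:

```python
def smooth_zoom_frame(frame_a, frame_b, t):
    """Crossfade between aligned frames from the outgoing and incoming
    cameras (t runs from 0 to 1 over the switch). A real spatial
    alignment also warps frame_b into frame_a's geometry first; that
    warp is omitted in this sketch."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# Halfway through the switch, the displayed pixels are an even blend,
# so the picture transitions gradually instead of jumping.
assert smooth_zoom_frame([0, 0], [2, 2], 0.5) == [1.0, 1.0]
```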
In some embodiments, after the back-end image processing module 530 completes the spatial alignment process, the front-end image processing module 520 may stop image processing on the image data acquired by the first camera 511. Stopping the image processing of the image data acquired by the first camera 511 may be achieved in various ways. For example, the first camera 511 may be turned off directly or the reception of the image data collected by the first camera 511 may be stopped.
In some embodiments, the flow through which image data acquired by the camera passes, via various processing steps, before being presented on the electronic device may be referred to as a pipeline. The pipeline can be used to determine the path through which the image data acquired by the camera needs to flow. For example, the pipeline can be used to control the data collected by the camera to be displayed on the electronic device after noise reduction, white balance, spatial alignment, and the like.
The control parameters for the pipeline may be issued in the form of a request by the back-end image processing module 530 to the front-end image processing module 520. The request may include capture parameters for the camera, for example, the aperture value, exposure value, and picture size used for shooting.
In some embodiments, the back-end image processing module 530 may issue control parameters for the pipeline along with control parameters for the front-end image processing module 520 to the front-end image processing module 520.
For example, the back-end image processing module 530 may instruct the front-end image processing module 520 to change the processing policy for the image data collected by the cameras while receiving the image data collected by the second camera 512 (e.g., after selecting a portion of the image processing algorithms to shut down, image processing is performed on the image data collected by the first camera 511 and the image data collected by the second camera 512 using the remaining algorithms).
In some embodiments, the back-end image processing module 530 may instruct the front-end image processing module 520 to stop receiving the image data collected by the first camera 511 after the spatial alignment process is completed, and at the same time, start all the image processing algorithms for performing image processing on the image data collected by the second camera 512, so as to ensure the shooting quality.
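The bundling of capture parameters with processing directives in a single request might be modeled as follows; the field names are assumptions, not the patent's interface:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Sketch of a per-frame request that carries capture parameters
# together with a processing directive, so control reaches the front
# end in step with the data stream. Field names are assumptions.
@dataclass
class CaptureRequest:
    aperture: float
    exposure_value: float
    frame_size: Tuple[int, int]
    bypassed_algorithms: List[str] = field(default_factory=list)

# While both cameras stream, the same request that carries the capture
# parameters also tells the front end to bypass noise reduction.
req = CaptureRequest(
    aperture=1.8,
    exposure_value=0.0,
    frame_size=(4000, 3000),
    bypassed_algorithms=["noise_reduction"],
)
assert req.bypassed_algorithms == ["noise_reduction"]
```

Because the directive rides in the same request as the capture parameters, the front end cannot apply one without the other, which is one way to keep control synchronized with the data stream.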
By sending the control parameters required by the front-end image processing module 520 together with the control parameters for the pipeline, control is synchronized with the change in the data stream. Therefore, on the premise of meeting the bandwidth limit, the real-time performance of the processing can be improved and the influence of camera switching on the shooting effect can be reduced.
In other embodiments, the front-end image processing module 520 may actively determine a processing strategy for the image data collected by the first camera 511 and the image data collected by the second camera 512 according to the current scene or parameter information.
For example, when the front-end image processing module 520 detects that the current zoom magnification meets the condition for lens switching, it starts the secondary camera, begins receiving the image data collected by the two cameras, and at the same time changes the processing strategy for the image data, thereby reducing the bandwidth requirement of the image processing process.
As another example, when the front-end image processing module 520 monitors that there are two data streams, the processing policy for the image data is changed.
Changing the processing policy may refer to the front-end image processing module 520 controlling at least part of the image data collected by the first camera 511 and the image data collected by the second camera 512 not to be processed by a first algorithm of the image processing algorithms.
For another example, when the front-end image processing module 520 detects only one data stream, it does not change the processing strategy for that stream; that is, all algorithms in the image processing algorithms are used to process the single stream of image data, so as to ensure shooting quality.
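The two monitoring cases above can be sketched as a single strategy function; the algorithm names are illustrative assumptions:

```python
def processing_strategy(num_streams):
    """Pick the algorithm set from the number of live data streams.
    Algorithm names are illustrative assumptions."""
    if num_streams >= 2:
        # Dual-stream (switching) phase: drop the first algorithm
        # to stay within the bandwidth budget.
        return {"disabled": ["first_algorithm"]}
    # Single stream: run everything for best shooting quality.
    return {"disabled": []}

assert processing_strategy(2) == {"disabled": ["first_algorithm"]}
assert processing_strategy(1) == {"disabled": []}
```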
By changing the processing strategy in time (e.g., turning off or on certain algorithms through which the image data flows), the delay in turning algorithm modules off or on can be reduced while ensuring a smooth switch.
Specifically, the front-end image processing module 520 may be caused to shut down or bypass some of the image processing algorithms that process image data only when image data acquired by two cameras is received simultaneously.
The front-end image processing module 520 actively controls the processing flow of the image data according to scene change information, which better ensures real-time switching performance and minimizes the influence of the change in the image data processing strategy on user experience.
The solution provided by the present application is described below in connection with fig. 6 and 7 and a specific embodiment.
Fig. 6 is a schematic structural diagram of an electronic device according to another embodiment of the present application. As shown in fig. 6, the electronic device 600 may include three cameras (a camera 611, a camera 612, and a camera 613), a front-end image processing module 620, and a back-end image processing module 640. The front-end image processing module 620 and the back-end image processing module 640 may be connected through the MIPI interface 630.
The front-end image processing module 620 may include a control module 621 and a processing module 622. The processing module 622 may be used to process data of the primary camera and data of the secondary camera. The primary camera and the secondary camera may be any two of the three cameras of the electronic device. The primary camera is the camera whose image data is mainly used for the currently output picture; the other camera is referred to as the secondary camera.
The back-end image processing module 640 may include a spatial alignment algorithm module 641 and an upper layer image processing module 642. The spatial alignment algorithm module 641 may perform spatial alignment processing according to the image data of the main camera and the sub camera provided by the front-end image processing module 620 to achieve smooth zooming.
Fig. 7 is a schematic diagram of an image processing flow of the electronic device shown in fig. 6.
As shown in fig. 7, in step S701, the camera is turned on.
In step S702, image preview or recording is started.
In step S703, a zoom operation is performed. For example, the zoom magnification may be switched among 0.6x, 1x, 2x, 5x, 20x, and the like, or the zoom operation may be performed within the zoom magnification range supported by the electronic device 600.
In steps S704 to S705, the secondary camera is triggered to start. At the same time, some of the image processing algorithms used by the front-end image processing module 620 to process the image data collected by the primary camera and the secondary camera are closed, so that part of that image data is not processed by those algorithms.
The trigger condition for starting the secondary camera may be the zoom magnification. For example, the current primary camera is camera 611 and the secondary camera to be turned on is camera 612. When the zoom magnification reaches 5x, the camera 612 is turned on.
In step S706, the secondary camera outputs a data stream at the target frame rate and resolution. Meanwhile, the image data acquired by the primary camera and the secondary camera is processed according to the given algorithm control strategy.
For example, the image processing algorithms that perform image processing on the image data acquired by the secondary camera may be turned off, while the image data collected by the primary camera is still processed using all algorithms in the image processing algorithms.
Directly closing all algorithms that process the image data collected by the secondary camera makes the control process simpler and easier to implement, while degrading the display effect of the captured or previewed image as little as possible.
In step S707, the spatial alignment algorithm is initialized, and spatial alignment processing is performed using the image data of the primary camera and the secondary camera.
In step S708, the camera is switched. That is, the current main and sub cameras are switched. For example, the current primary camera is camera 611 and the secondary camera is camera 612. After the switching is completed, the main camera becomes the camera 612, and the sub-camera becomes the camera 611.
In steps S709 and S710, the secondary camera is turned off, and at the same time all of the image processing algorithms of the front-end image processing module 620 are prepared to be turned on, so that the normal image data processing flow can continue after the switch is completed.
The preparation to turn all algorithms back on may be triggered when the zoom magnification has changed to a certain extent, which improves the real-time performance of the processing. After the spatial alignment processing ends, the data stream can be routed back into the originally configured algorithm modules for processing, so as to ensure the image processing effect after switching.
In step S711, preview or photographing is continued.
In some embodiments, the change to the processing policy of the image data may be made only when the primary and secondary cameras are on at the same time. Thus, after the switching is completed, the image data still has a better display effect.
For example, for the noise reduction (NR) module, when the zoom magnification changes to a certain extent, a command to turn off the module and a command to turn on the secondary camera may be sent to the front-end image processing module 620 together. After receiving the instruction, the front-end image processing module 620 adjusts the data stream so that the associated image data is no longer processed by the noise reduction module.
When the primary-secondary switch is completed, a command to turn the algorithm module (the noise reduction module) back on may be issued to the front-end image processing module 620 together with a command to turn off the secondary data stream. Since generally only the primary camera continues to output a data stream after the switch is completed and the secondary camera is turned off, only one data stream has to be processed at this point. The bandwidth limit therefore no longer applies, and the data stream can again be processed by the noise reduction module to ensure the image effect.
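The command ordering described in the last two paragraphs can be sketched as follows; the command names are illustrative assumptions, not the patent's interface:

```python
# Sketch of the command bundling: the NR-off command travels together
# with the secondary-camera-on command, and the NR-on command travels
# with the secondary-stream-off command. Command names are assumptions.
def switch_sequence(send):
    # Phase 1: zoom crosses the threshold -> start dual streaming.
    send(["open_secondary_camera", "disable_noise_reduction"])
    # ... spatial alignment runs while both streams are live ...
    # Phase 2: switch complete -> one stream again, restore quality.
    send(["close_secondary_stream", "enable_noise_reduction"])

log = []
switch_sequence(log.append)
assert log[0] == ["open_secondary_camera", "disable_noise_reduction"]
assert log[1] == ["close_secondary_stream", "enable_noise_reduction"]
```

Bundling each pair into one message means the front end never sees a state where two full-quality streams must be processed at once.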
On an electronic device or platform with limited bandwidth, the scheme provided by the present application can determine the front-end image processing module's selection and control of the image processing algorithms according to scene information on the AP side and dynamic identification of the zoom magnification (i.e., parameters that reflect whether one camera or both cameras are working).
Adjusting the working states of different algorithms in the front-end image processing module in real time according to changes in scene information can satisfy, under a low bandwidth limit, the back-end spatial alignment algorithm's requirement of acquiring data from both cameras simultaneously.
Furthermore, the solution provided by the present application achieves the same effect without requiring a larger bandwidth design, thereby reducing the impact on the performance and power consumption of the electronic device.
The device embodiments of the present application have been described above with reference to figs. 1 to 7, and the method embodiments provided by the present application are described below with reference to fig. 8. It should be understood that the method embodiments and the device embodiments correspond to each other; for details not described in the method embodiments, reference may be made to the description of the device embodiments.
Fig. 8 is a flowchart of a method for processing image data according to an embodiment of the present application.
As shown in fig. 8, in step S810, in the process of switching between the first camera and the second camera, the front-end image processing module receives the image data collected by the first camera and the image data collected by the second camera at the same time, and performs image processing on the image data collected by the first camera and/or the image data collected by the second camera by using an image processing algorithm. The image processing algorithm comprises a first algorithm, and at least part of image data in the image data collected by the first camera and the image data collected by the second camera are not processed by the first algorithm.
In step S820, the back-end image processing module receives the image data collected by the first camera and the image data collected by the second camera from the front-end image processing module, and performs spatial alignment processing on the image data collected by the first camera and the image data collected by the second camera.
Optionally, in some embodiments, the first camera is a master camera, and the image data collected by the first camera is processed using all of the image processing algorithms.
Optionally, in some embodiments, the second camera is a secondary camera, and image data acquired by the second camera is not processed by the image processing algorithm.
Optionally, in some embodiments, that at least part of the image data is not processed by the first algorithm comprises: the algorithm module corresponding to the first algorithm is closed for the at least part of the image data; or the data stream corresponding to the at least part of the data bypasses the algorithm module corresponding to the first algorithm.
Optionally, in some embodiments, the back-end image processing module is an application processor, and the front-end image processing module is an image signal processor connected to the application processor.
Optionally, in some embodiments, the configuration of the bandwidth of the front-end image processing module may not support image processing of both the image data acquired by the first camera and the image data acquired by the second camera using all of the image processing algorithms.
Fig. 9 is a schematic structural diagram of an electronic device according to another embodiment of the present application. The electronic device 900 shown in fig. 9 may be any electronic device having an image processing function. For example, the electronic device 900 may be a mobile terminal. The electronic device 900 may include a memory 910 and a processor 920. Memory 910 may be used to store executable code. The processor 920 may be used to execute executable code stored in the memory 910 to implement the steps in the various methods described previously. In some embodiments, the electronic device 900 may also include a network interface 930, and data exchange of the processor 920 with external devices may be achieved through the network interface 930.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any other combination. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present disclosure, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, a magnetic tape), an optical medium (e.g., a digital video disc (Digital Video Disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the disclosure, and it is intended to cover the scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. An electronic device, comprising:
a first camera and a second camera, wherein the first camera and the second camera correspond to different focal length ranges;
The front-end image processing module is configured to simultaneously receive image data collected by the first camera and image data collected by the second camera in the process of switching the first camera and the second camera, and perform image processing on the image data collected by the first camera and/or the image data collected by the second camera by using an image processing algorithm, wherein the image processing algorithm comprises a first algorithm, and at least part of the image data collected by the first camera and the image data collected by the second camera are not processed by the first algorithm;
the rear-end image processing module is configured to receive the image data collected by the first camera and the image data collected by the second camera from the front-end image processing module, and perform space alignment processing on the image data collected by the first camera and the image data collected by the second camera.
2. The electronic device of claim 1, wherein the first camera is a master camera and the image data collected by the first camera is processed using all of the image processing algorithms.
3. The electronic device of claim 1, wherein the second camera is a secondary camera, and wherein image data collected by the second camera is not processed by the image processing algorithm.
4. The electronic device of claim 1, wherein the at least a portion of the image data not processed by the first algorithm comprises:
the algorithm module corresponding to the first algorithm is closed for the at least partial image data; or
and the data flow corresponding to at least part of the data bypasses the algorithm module corresponding to the first algorithm.
5. The electronic device of claim 1, wherein the back-end image processing module is an application processor and the front-end image processing module is an image signal processor coupled to the application processor.
6. The electronic device of claim 1, wherein the configuration of the bandwidth of the front-end image processing module is not capable of supporting image processing of both the image data acquired by the first camera and the image data acquired by the second camera using all of the image processing algorithms.
7. A method of processing image data, comprising:
In the process of switching a first camera and a second camera, a front-end image processing module receives image data collected by the first camera and image data collected by the second camera at the same time, and performs image processing on the image data collected by the first camera and/or the image data collected by the second camera by using an image processing algorithm, wherein the image processing algorithm comprises a first algorithm, and at least part of the image data collected by the first camera and the image data collected by the second camera are not processed by the first algorithm;
the rear-end image processing module receives the image data collected by the first camera and the image data collected by the second camera from the front-end image processing module, and performs space alignment processing on the image data collected by the first camera and the image data collected by the second camera.
8. The method of claim 7, wherein the first camera is a master camera and the image data collected by the first camera is processed using all of the image processing algorithms.
9. The method of claim 7, wherein the second camera is a secondary camera, and wherein image data acquired by the second camera is not processed by the image processing algorithm.
10. The method of claim 7, wherein the at least a portion of the image data not being processed by the first algorithm comprises:
the algorithm module corresponding to the first algorithm is closed for the at least partial image data; or
and the data flow corresponding to at least part of the data bypasses the algorithm module corresponding to the first algorithm.
11. The method of claim 7, wherein the back-end image processing module is an application processor and the front-end image processing module is an image signal processor coupled to the application processor.
12. The method of claim 7, wherein the configuration of the bandwidth of the front-end image processing module is incapable of supporting image processing of both the image data acquired by the first camera and the image data acquired by the second camera using all of the image processing algorithms.
13. An apparatus for processing image data, comprising a memory having executable code stored therein and a processor configured to execute the executable code to implement the method of any of claims 7-12.
CN202210111705.5A 2022-01-29 2022-01-29 Electronic device, method and device for processing image data Pending CN116582748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210111705.5A CN116582748A (en) 2022-01-29 2022-01-29 Electronic device, method and device for processing image data

Publications (1)

Publication Number Publication Date
CN116582748A true CN116582748A (en) 2023-08-11

Family

ID=87532791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210111705.5A Pending CN116582748A (en) 2022-01-29 2022-01-29 Electronic device, method and device for processing image data

Country Status (1)

Country Link
CN (1) CN116582748A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117676325A (en) * 2023-10-27 2024-03-08 荣耀终端有限公司 Control method and related device in multi-shot scene

Similar Documents

Publication Publication Date Title
US9204039B2 (en) Image processing method and apparatus
EP2523450B1 (en) Handheld electronic device with dual image capturing method and computer program product
US8823837B2 (en) Zoom control method and apparatus, and digital photographing apparatus
CN110505411B (en) Image shooting method and device, storage medium and electronic equipment
CN110933294B (en) Image processing method, terminal and computer storage medium
CN103986867A (en) Image shooting terminal and image shooting method
CN109923850B (en) Image capturing device and method
CN111641777A (en) Image processing method, image processing apparatus, image processor, electronic device, and storage medium
CN113840070B (en) Shooting method, shooting device, electronic equipment and medium
CN112532808A (en) Image processing method and device and electronic equipment
CN111726528B (en) Camera switching method, device, terminal and computer storage medium
WO2023231869A1 (en) Photographing method and apparatus
EP2760197B1 (en) Apparatus and method for processing image in mobile terminal having camera
CN116582748A (en) Electronic device, method and device for processing image data
JP2024504159A (en) Photography methods, equipment, electronic equipment and readable storage media
CN112601028B (en) Image pickup control method and device, computer equipment and storage medium
CN117692771A (en) Focusing method and related device
CN112839166A (en) Shooting method and device and electronic equipment
CN113837937B (en) Multimedia processing chip, electronic equipment image fusion method and image clipping method
CN117354625A (en) Image processing method and device, electronic equipment and readable storage medium
CN113840075B (en) Electronic equipment and image fusion method based on electronic equipment
CN210297875U (en) Camera device for mobile terminal and mobile terminal
RU2807091C1 (en) Image merger method and electronic device
CN109842740A (en) Panoramic camera, image processing system and image processing method
CN115696039A (en) Data processing method and device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination