WO2023005298A1 - Multi-camera-based image content masking method and apparatus - Google Patents

Multi-camera-based image content masking method and apparatus

Info

Publication number
WO2023005298A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
electronic device
area
shooting
camera
Prior art date
Application number
PCT/CN2022/089274
Other languages
English (en)
French (fr)
Inventor
李宗原
魏芅
张作超
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2023005298A1

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N23/45: for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
                    • H04N23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
                    • H04N23/80: Camera processing pipelines; Components thereof
                • H04N5/00: Details of television systems
                    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
                        • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
                            • H04N5/2624: for obtaining an image which is composed of whole input images, e.g. splitscreen
                            • H04N5/265: Mixing

Definitions

  • The present application relates to the field of terminals, and in particular to a multi-camera-based method and device for masking image content.
  • A selfie stick effectively solves the problem of limited composition when taking selfies, especially in outdoor sports, group selfies, or full-body selfies.
  • However, shooting with a selfie stick also creates a new problem: the selfie stick appears in the selfie photos, which degrades the user experience.
  • To address this, the present application provides a multi-camera-based method and device for masking image content.
  • The method can be applied to electronic devices capable of shooting, such as mobile phones and tablet computers.
  • Such devices can stitch a group of images with parallax collected by two or more cameras, obtaining an image in which specific image content is masked and thereby improving the user's shooting experience.
  • In a first aspect, the present application provides a method for masking image content, applied to an electronic device equipped with cameras. The method includes: acquiring a group of image frames, the group including at least a first image and a second image, where the first image and the second image are collected at the same moment by a first camera and a second camera of the electronic device, respectively, and the first camera and the second camera are both front cameras or both rear cameras of the electronic device; replacing a first object in the first image with image content from the second image to obtain a third image; and displaying the third image.
  • In this way, the electronic device can simultaneously use multiple front cameras or rear cameras to collect a group of image frames with parallax.
  • Images captured by multiple front or rear cameras at the same moment naturally exhibit parallax.
  • Using these parallax images to fill in the content to be masked preserves, as far as possible, the image features occluded by that content and improves the masking effect.
  • Moreover, masking image content by stitching parallax images reduces the amount of computation and improves processing efficiency.
  • In some embodiments, replacing the first object in the first image with image content from the second image to obtain the third image specifically includes: replacing the image content of a second area in the first image with the image content of a first area in the second image, where the second area is the area displaying the first object in the first image, and the first area corresponds to the second area.
  • That the first area corresponds to the second area includes: the first area and the second area have the same size and position; or the first area is larger than the second area, the center of the first area coincides with the center of the second area, and the first area covers the second area.
  • That is, the first area may be exactly the area corresponding to the first object, i.e., identical to the second area, so that the electronic device can use the image content of the first area to fill in the first object to be masked.
  • The first area may also be larger than the second area (a minimal sketch of deriving such an enlarged area follows below). In this way, the first area not only fills in the first object in the second area, but also avoids the jagged edges caused by incomplete masking, improving the masking effect.
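For illustration only, here is a minimal sketch of deriving a replacement area slightly larger than, and centered on, the segmented object area. It assumes OpenCV and a binary object mask; the helper name and the margin value are assumptions, not part of the application.

```python
import cv2
import numpy as np

def expand_mask(object_mask: np.ndarray, margin: int = 5) -> np.ndarray:
    """Grow a binary object mask (the second area) into a slightly larger
    replacement area (the first area) with the same center.

    object_mask: uint8 array, 255 where the object (e.g. the selfie stick)
    was segmented, 0 elsewhere. `margin` is an assumed pixel margin.
    """
    # A symmetric elliptical kernel grows the area equally in all
    # directions, so the grown area keeps the original center while
    # covering the object's edges, reducing jagged, incompletely
    # masked borders.
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * margin + 1, 2 * margin + 1))
    return cv2.dilate(object_mask, kernel)
```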
  • In some embodiments, before acquiring the group of image frames, the method further includes: detecting a first user operation.
  • In this way, the electronic device can detect a user operation and, in response, execute the image masking algorithm provided in this application. The electronic device thus runs the algorithm only when the user needs it, which improves the pertinence of the masking function and reduces wasted computation.
  • In some embodiments, the first user operation is an image capture operation.
  • That is, the electronic device may execute the image content masking method provided in the present application after detecting the user's shooting operation.
  • The image capture operation includes: an image capture operation detected by the electronic device, and/or an image capture operation detected by a second device. An image capture operation detected by the electronic device includes an operation acting on a shooting control displayed on a user interface provided by the electronic device, and an operation acting on a button. The second device is connected to the electronic device.
  • The imaging of the second device in the first image is the first object.
  • In some embodiments, displaying the third image specifically includes: saving the third image in response to the first user operation; and, upon detecting a second user operation, displaying the third image or a thumbnail of the third image.
  • That is, in response to the first user operation, the electronic device can save the third image; then, upon detecting the second user operation, it can display the third image.
  • The second user operation includes: an operation of displaying the third image in the gallery application, or an operation of a third-party application invoking the third image.
  • For example, the electronic device can display the third image while the user browses the gallery application, or when an image stored in the gallery is invoked and sent to another electronic device.
  • In some embodiments, the first user operation is an operation of opening a shooting preview interface.
  • That is, the electronic device may execute the image content masking method provided in the present application after detecting the user's operation of invoking the camera.
  • The operation of opening the shooting preview interface includes: entering the shooting preview interface from a first application icon, the first application being an application provided by the electronic device; or entering the shooting preview interface from a first control provided by a third-party application.
  • For example, the electronic device may execute the image content masking method in the scenario where it detects the user opening the camera application, or where a third-party application opens the shooting preview interface.
  • In some embodiments, the method further includes: displaying a shooting preview interface that includes a preview window, the preview window being used to display in real time the images collected by the camera of the electronic device. Displaying the third image then specifically includes: displaying the third image in the preview window.
  • In this way, the electronic device displays the masked image in the preview window of the shooting preview interface, so the user can see the masked shooting effect before shooting.
  • In some embodiments, the second device is a selfie stick, and the first object is the selfie stick in the image.
  • In this way, the electronic device can mask the selfie stick appearing in the preview window or in a captured photo or video, improving the user's shooting experience.
  • The selfie stick includes a stick body and a clamping part. That the first object is the selfie stick in the image specifically includes: the first object is the stick body of the selfie stick in the image.
  • In a second aspect, the present application provides an electronic device comprising one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors and are used to store computer program code comprising computer instructions. When the one or more processors execute the computer instructions, the electronic device performs the method described in the first aspect and any possible implementation of the first aspect.
  • In a third aspect, the present application provides a computer-readable storage medium comprising instructions that, when run on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation of the first aspect.
  • In a fourth aspect, the present application provides a computer program product containing instructions that, when run on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation of the first aspect.
  • The electronic device provided in the second aspect, the computer storage medium provided in the third aspect, and the computer program product provided in the fourth aspect are all used to execute the method provided in the present application; the beneficial effects they can achieve are therefore the same as those of the corresponding method and are not repeated here.
  • FIG. 1 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 2 is a group of images with parallax provided by an embodiment of the present application.
  • FIG. 3A-3C are a set of user interfaces provided by an embodiment of the present application.
  • FIG. 4 is a flow chart of masking image content provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of an electronic device acquiring a set of images with parallax, provided by an embodiment of the present application.
  • FIG. 6A is a schematic diagram of an electronic device recognizing the selfie stick in an image, provided by an embodiment of the present application.
  • FIG. 6B and FIG. 6C are structural diagrams of the encoder and decoder provided by embodiments of the present application.
  • FIG. 7 is a schematic diagram of masking image content provided by an embodiment of the present application.
  • FIG. 8 is a hardware structural diagram of an electronic device provided by an embodiment of the present application.
  • Depth of field (DOF) cameras, time of flight (TOF) cameras, and other cameras with depth-sensing capability can already capture images containing depth information.
  • A camera with such depth-sensing capability may be called a depth-sensing camera, and the images it captures may include depth information of the objects in them.
  • An image processing module can generate a 3D model of the selfie stick from this depth information and determine the area of the selfie stick in the selfie image. The image processing module can then mask that area so that the selfie stick does not appear in the image the user obtains, improving the shooting experience.
  • However, masking the selfie stick with a depth-sensing camera requires a large computational cost: the amount of computation performed by the electronic device increases significantly, and with it the time cost. For users, a long computing time greatly degrades the experience.
  • By contrast, mobile phone A can stitch the above multiple frames of images with parallax and fuse them into an image in which the selfie stick is masked.
  • The fused image may be referred to as the output image.
  • After obtaining the output image, mobile phone A can store it for further operations by the user, such as editing, browsing, and so on.
  • When mobile phone A has more cameras, it can acquire more image frames at the same moment, with parallax between any two of them. Mobile phone A can then fuse more of these images, improving the masking effect.
  • When cam1 and cam2 are both front-facing cameras, "more cameras" here refers to more front-facing cameras, and "more image frames" refers to the images collected by those front-facing cameras.
  • Likewise, when cam1 and cam2 are both rear cameras, "more cameras" refers to more rear cameras, and the image frames are those collected by the rear cameras.
  • The electronic device 100 also includes tablet computers, smart watches, smart bracelets, sports cameras, augmented reality (AR) devices, virtual reality (VR) devices, and other portable handheld devices capable of shooting.
  • Exemplary embodiments of the electronic device 100 include, but are not limited to, portable electronic devices running Linux or other operating systems.
  • FIG. 3A-3C exemplarily show the user interfaces in a group of photo-taking scenarios.
  • First, a scenario in which the multi-camera-based image content masking method provided by the embodiments of the present application is implemented is introduced.
  • In this scenario, the shooting scene is a selfie scene, and the image content the user wishes to mask is a selfie stick.
  • The user operation of invoking the camera to display the shooting preview interface includes: an operation acting on the camera application icon of the electronic device 100, and an operation acting on a shooting control provided by a third-party application.
  • Shooting controls provided by third-party applications include, for example, a photo-taking control, a video-call control, and so on.
  • The preview window 311 can be used to display the images collected by the camera in real time.
  • The image displayed in the preview window 311 may be referred to as an original image.
  • The original image has already undergone some basic processing by the image processing module, such as color temperature calibration, linear correction, noise removal, dead-pixel removal, interpolation, automatic exposure control, etc.
  • These basic processing operations are all necessary for converting optical signals into electrical signals and imaging them on an electronic device such as a mobile phone. Therefore, in the embodiments of the present application, the image that has undergone this basic processing and is displayed in the preview window 311 is still referred to as an original image.
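Purely to illustrate how such a basic pipeline chains its stages (the application does not give an implementation), here is a toy sketch; both stage functions are simplified placeholders, not real ISP algorithms.

```python
import numpy as np

def remove_dead_pixels(img: np.ndarray) -> np.ndarray:
    # Placeholder: replace saturated outlier pixels with the frame median.
    out = img.copy()
    out[img > 250] = np.median(img)
    return out

def auto_exposure(img: np.ndarray) -> np.ndarray:
    # Placeholder: scale brightness toward a mid-gray target.
    scale = 128.0 / max(float(img.mean()), 1.0)
    return np.clip(img.astype(np.float32) * scale, 0, 255).astype(np.uint8)

def basic_pipeline(raw: np.ndarray) -> np.ndarray:
    """Chain the basic stages in order; the result plays the role of the
    'original image' shown in preview window 311."""
    for stage in (remove_dead_pixels, auto_exposure):
        raw = stage(raw)
    return raw
```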
  • The mode bar 312 can be used to select a shooting mode.
  • The shooting modes include: night mode, portrait mode, photo mode, video mode, professional mode, and so on.
  • Here, the electronic device 100 is in photo mode.
  • The shooting button 313 can be used to receive the user's photo-taking operation.
  • In response, the electronic device 100 determines that the image frame displayed in the preview window 311 at that moment is the captured image.
  • The user interface 31 may also include other controls, for example, the controls provided in area 315 for adjusting the shooting picture, the switch button 316, and the like.
  • The switch button 316 can be used to switch the camera (front or rear) of the electronic device 100. It can be understood that the user interface 31 shown in FIG. 3A is one possible user interface and should not be construed as limiting the embodiments of the present application.
  • The electronic device 100 may then display the user interface 32 shown in FIG. 3B.
  • The user operations controlling the electronic device 100 to shoot include: the operation acting on the shooting button 313 in FIG. 3A, and the operation acting on the shooting button of the selfie stick.
  • Controlling the electronic device 100 to shoot also includes a shooting operation on a physical button of the electronic device 100, and the like.
  • The image preview window 314 may display a thumbnail of the picture captured in response to the user's shooting operation at the previous moment. When the electronic device 100 then detects a user operation acting on the image preview window 314, it may, in response, display the user interface 33 shown in FIG. 3C.
  • The user interface 33 is the interface on the electronic device 100 for displaying captured photos.
  • The user interface 33 includes an area 331.
  • Area 331 can be used to display the images taken by the user, including single-frame and multi-frame images.
  • A single-frame image is, for example, a photo; multi-frame images include videos, GIFs, and the like.
  • The user interface 33 may also include other controls, such as a control 332, a control 333, and so on.
  • Mobile phone A may display more pictures from the album.
  • Mobile phone A may also display detailed information about the image in area 331.
  • Such detailed information includes, for example, the shooting time, shooting location, image size (storage space occupied), resolution, storage path, etc. The embodiments of the present application do not limit this.
  • The multi-camera-based image content masking method provided in the embodiments of the present application can also be applied to video shooting scenarios.
  • In that case, the generated video file does not include the selfie stick, so the user directly obtains a selfie video with the selfie stick masked.
  • S101: The electronic device 100 detects that the user is using a camera to take pictures.
  • A working front camera may indicate a selfie scene, but not necessarily a selfie-stick selfie scene. Therefore, the electronic device 100 may further detect whether a selfie stick is connected; upon detecting one, it can confirm that the user is taking a selfie with a selfie stick.
  • In some embodiments, the electronic device 100 is provided with an external interface, and the selfie stick includes a button for controlling photographing and a connection line.
  • The external interface includes a 3.5mm headphone jack, a Type-C interface, a Lightning interface, and so on; the embodiments of the present application do not limit this.
  • The connecting wire of the selfie stick can be plugged into the external interface, after which the user can control the shooting task of the electronic device 100 through the shooting button on the selfie stick.
  • In this case, the electronic device 100 can determine that the user is using the selfie stick to take a selfie. Referring to FIG. 3A, when the electronic device 100 displays the user interface 31 shown in FIG. 3A, it may determine that the user is taking a selfie.
  • In other shooting scenarios, the electronic device 100 may also implement the multi-camera-based image content masking method provided in the embodiments of the present application.
  • For example, the user can choose to remove passers-by who happen to appear in the viewfinder. That is, the shooting scene is not limited to selfies; it can also be a normal shooting scene using the rear camera.
  • Similarly, the masked object is not limited to the selfie stick; it may also be other preset image content that the electronic device 100 can recognize. The embodiments of the present application do not limit this.
  • In some embodiments, the electronic device 100 may enable the selfie-stick masking method provided in the embodiments of the present application; that is, implementing the multi-camera selfie-stick masking method is optional.
  • The electronic device 100 can implement the above method in any shooting scene to identify and mask the selfie stick in the image. In other embodiments, the electronic device 100 may further refine the shooting scene and implement the above method only after identifying a preset detailed scene, for example, after determining in S101 that the user is taking a selfie or, further, that the user is taking a selfie with a selfie stick.
  • The electronic device 100 can acquire a group of image frames with parallax captured by different cameras at the same moment.
  • FIG. 5 exemplarily shows a schematic diagram of cam1 and cam2 capturing images and generating image frame streams.
  • stream1 represents the image frame stream collected and generated by cam1; stream2 represents the image frame stream collected and generated by cam2.
  • stream1 and stream2 each include a number of consecutive image frames, and the frames in stream1 and stream2 are aligned in time.
  • Time alignment means that any image frame in stream1 has the same timestamp as the image frame at the corresponding position in stream2; that is, the image frames at the same position in stream1 and stream2 were collected by cam1 and cam2, respectively, at the same moment.
  • For example, stream1 includes image frames T1, T2, T3, T4, and T5, and stream2 includes image frames S1, S2, S3, S4, and S5.
  • T1 and S1 are images collected by cam1 and cam2, respectively, at the same moment.
  • The relationships between T2, T3, T4, T5 and S2, S3, S4, S5 follow the introduction of T1 and S1 above and are not repeated here.
  • T1 and S1 can therefore be referred to as a group of image frames with parallax, as can T2 and S2, T3 and S3, T4 and S4, and T5 and S5. It can be understood that when more cameras are used for shooting, the electronic device 100 acquires more image frames with parallax; a toy sketch of pairing such frames by timestamp follows below.
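As an illustration of the time alignment just described (not taken from the application), the following sketch pairs frames from two streams by timestamp; the Frame type and its field names are assumptions.

```python
from typing import NamedTuple

class Frame(NamedTuple):
    timestamp: int   # assumed capture time, e.g. in microseconds
    pixels: object   # image data (placeholder type)

def pair_frames(stream1: list[Frame], stream2: list[Frame]) -> list[tuple[Frame, Frame]]:
    """Pair frames from two camera streams that share a timestamp.

    In the aligned case described above, frames at the same position
    already share a timestamp; indexing by timestamp also tolerates an
    occasional dropped frame in either stream.
    """
    by_time = {f.timestamp: f for f in stream2}
    return [(f, by_time[f.timestamp]) for f in stream1 if f.timestamp in by_time]
```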
  • In this way, the electronic device 100 can conveniently and quickly acquire images with parallax, without requiring the user to adjust the position of mobile phone A or move his or her own position.
  • Not limited to images captured by different cameras at the same moment, the parallax images acquired by the electronic device 100 may also include images captured by the cameras at different moments.
  • For example, the electronic device 100 may acquire multiple consecutive frames as a group of image frames and then fuse them to obtain an image in which the selfie stick or other image content is masked.
  • S103: The electronic device 100 locates the selfie stick in image frame T1.
  • After acquiring T1 and S1, the electronic device 100 first locates the selfie stick in image frame T1. Specifically, the electronic device 100 may run an image segmentation algorithm on T1 to obtain an edge image (mask).
  • An edge image is an image in which the image content is marked and distinguished using the edge contours of the different objects in the image as boundaries.
  • Through segmentation, the electronic device 100 can determine the content contained in T1 and the edges of each piece of content.
  • The image content in T1 includes: a person 10 and a selfie stick 20.
  • Accordingly, the content of the edge image T1-mask obtained by the electronic device 100 includes: a selfie stick 601, a portrait 602, and a background 603.
  • In the embodiments of the present application, the image segmentation algorithm used by the electronic device 100 is a semantic image segmentation algorithm.
  • The algorithm includes an autoencoder (encoder) and an autodecoder (decoder), both built on deep neural networks.
  • The autoencoder is a data compression algorithm that can be used to extract the features of the input image.
  • The autodecoder is the reverse reconstruction of the autoencoder, i.e., the reverse decoding of the deep feature space.
  • FIG. 6B shows the structure of the autoencoder.
  • The autoencoder includes a front end 60 and a back end 61.
  • The front end 60 includes a three-layer convolutional network (conv2); the back end is jointly composed of Block1 and Block2, where Block1 includes a two-layer convolutional network (conv2) and Block2 includes a three-layer convolutional network (conv2).
  • The electronic device 100 may first input T1 into the autoencoder of the image segmentation algorithm to extract the image features of T1.
  • The image frames generated by cam1 and cam2 are three-channel RGB images; through the autoencoder, such a three-channel RGB image can be converted into a 196-channel high-dimensional feature.
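The text specifies the layer counts and the 3-to-196-channel mapping but not the kernel sizes, strides, or intermediate widths, so the following PyTorch sketch of the described encoder fills those in with illustrative assumptions.

```python
import torch.nn as nn

def conv(cin: int, cout: int, stride: int = 1) -> nn.Sequential:
    # Assumed 3x3 convolution + ReLU; the text only says "conv2" layers.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1), nn.ReLU(inplace=True))

class Encoder(nn.Module):
    """Front end 60 (three conv layers) followed by back end 61
    (Block1: two conv layers; Block2: three conv layers), mapping a
    3-channel RGB image to 196-channel features. The intermediate
    channel widths and strides are assumptions."""
    def __init__(self):
        super().__init__()
        self.front = nn.Sequential(conv(3, 32, 2), conv(32, 64), conv(64, 64))
        self.block1 = nn.Sequential(conv(64, 128, 2), conv(128, 128))
        self.block2 = nn.Sequential(conv(128, 196, 2), conv(196, 196), conv(196, 196))

    def forward(self, x):
        return self.block2(self.block1(self.front(x)))
```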
  • FIG. 6C shows the structure of the autodecoder.
  • The autodecoder includes a front end 62 and a back end 63.
  • The front end 62 includes three upsampling modules (upsample), each consisting of a three-layer convolutional network (conv2) and one linear interpolation layer; the back end 63 is a single-layer convolutional network (conv2).
  • The electronic device 100 can input the encoded T1 into the autodecoder and reversely reconstruct the featurized T1, restoring the above 196-channel high-dimensional features into a 3-channel RGB image in which the segmentation results are marked.
  • In this way, the electronic device 100 can obtain the edge image shown as T1-mask.
  • The image content displayed in T1-mask includes: the selfie stick 601, the portrait 602, and the background 603. The electronic device 100 can then locate the selfie stick 601 in the picture.
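Under the same assumptions, a matching sketch of the described decoder: three upsampling modules (each three convolutions plus one linear-interpolation step) followed by a single output convolution that maps the 196 channels back to a 3-channel map with the segmentation marked. The channel widths are again illustrative.

```python
import torch.nn as nn

def conv(cin: int, cout: int, stride: int = 1) -> nn.Sequential:
    # Same assumed 3x3 convolution + ReLU as in the encoder sketch.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1), nn.ReLU(inplace=True))

class UpsampleModule(nn.Module):
    """Three conv layers plus one linear (bilinear, 2x) interpolation step."""
    def __init__(self, cin: int, cout: int):
        super().__init__()
        self.convs = nn.Sequential(conv(cin, cout), conv(cout, cout), conv(cout, cout))
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, x):
        return self.up(self.convs(x))

class Decoder(nn.Module):
    """Front end 62 (three upsampling modules) + back end 63 (one conv),
    restoring 196-channel features to a 3-channel, image-sized map."""
    def __init__(self):
        super().__init__()
        self.front = nn.Sequential(
            UpsampleModule(196, 128), UpsampleModule(128, 64), UpsampleModule(64, 32))
        self.back = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x):
        return self.back(self.front(x))
```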
  • Not limited to semantic segmentation, the electronic device 100 can also use other image segmentation algorithms to locate the selfie stick in the image frame, such as feature-encoding-based VGGNet and ResNet, the region-proposal-based Region-based Convolutional Neural Network (R-CNN), traditional region-growing algorithms, segmentation methods based on edge detection, and so on.
  • The electronic device 100 then uses image frame S1 to replace the selfie stick in T1 to obtain a masked image.
  • The selfie stick 601 in T1 may be called the masked object, that is, the image content the electronic device 100 needs to remove.
  • The pixel area marked in S1 corresponding to the pixel position of the selfie stick in T1 may be called the replacement material, that is, the pixel area used to replace the selfie stick in T1.
  • FIG. 7 exemplarily shows a schematic diagram of the electronic device 100 replacing the masked object with the replacement material. The replacement process is described below with reference to FIG. 7.
  • First, the electronic device 100 can cut S1 according to the pixel position of the selfie stick in T1 and determine the pixel area of the replacement material.
  • Specifically, according to the pixel positions marked as the selfie stick 601 in T1, the electronic device 100 may mark the pixels at the same positions in S1, as shown by S1' in the figure.
  • S1 is the parallax image frame collected by cam2 at the same moment as T1.
  • The foregoing embodiments have already introduced S1 in detail, so the details are not repeated here.
  • In this way, the electronic device 100 can determine the area 60.
  • The area 60 represents the replacement material; the pixels included in area 60 correspond one-to-one with the pixels marked as the selfie stick 601 in T1.
  • Then, the electronic device 100 can replace the pixels marked as the selfie stick 601 in T1 with the image content marked as the replacement material in S1. For example, the electronic device 100 can delete the image content marked as the selfie stick 601 in T1 and move the image content of area 60 in S1 into T1, so that the electronic device 100 obtains an image frame without the selfie stick 601, shown as T1'.
  • The content of image frame T1' includes: the person 602 and the background 603, and does not include the selfie stick 601; that is, the selfie stick 601 in T1 has been removed. A minimal sketch of this replacement step follows below.
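A minimal NumPy sketch of the replacement step, assuming T1 and S1 are same-sized H x W x 3 arrays and `mask` is the boolean selfie-stick area (possibly enlarged as discussed above); any geometric correction between the two viewpoints is omitted.

```python
import numpy as np

def replace_masked_object(t1: np.ndarray, s1: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill the masked-object pixels of t1 with the same-position pixels
    of the parallax frame s1, producing the masked image T1'."""
    t1_prime = t1.copy()
    # Because s1 was captured from a different viewpoint, the pixels at
    # the stick's position in s1 largely show the background that the
    # stick occludes in t1.
    t1_prime[mask] = s1[mask]
    return t1_prime
```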
  • T1 and S1 are merely examples of images collected by the cameras of the electronic device 100.
  • It can be understood that the electronic device 100 may equally use the image content in T1 to replace the content to be masked in S1.
  • In some embodiments, the size of the replacement material may be larger than the size of the masked object, so as to avoid edge aliasing.
  • Edge aliasing refers to the phenomenon in which the masked-object area located by the image segmentation algorithm does not completely coincide with the position of the actual object in the original image, so the masked object is not completely eliminated.
  • After the replacement, the electronic device 100 may also perform self-filling on the replaced image.
  • Self-filling refers to filling dirty pixels with the pixels near them.
  • The dirty pixels are the remnants of the masked object left after the replacement, that is, pixels from which the masked object has not been completely removed.
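The text does not specify the self-filling algorithm. As one plausible realization (an assumption, not the application's method), OpenCV's inpainting fills the leftover dirty pixels from their neighborhood:

```python
import cv2
import numpy as np

def self_fill(image: np.ndarray, dirty_mask: np.ndarray) -> np.ndarray:
    """Fill residual masked-object pixels from nearby pixels.

    image: uint8 BGR frame after replacement (T1').
    dirty_mask: uint8 mask, 255 where stick remnants remain.
    """
    # cv2.inpaint reconstructs the masked pixels from the surrounding
    # neighborhood (radius 3 here), matching the described idea of
    # filling dirty pixels with pixels near them.
    return cv2.inpaint(image, dirty_mask, 3, cv2.INPAINT_TELEA)
```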
  • The electronic device 100 may also determine the similarity between the image content in S1 and T1.
  • In the embodiments above, the selfie stick may be referred to as the second device, and the selfie stick in the image may be referred to as the first object.
  • An operation acting on the control 314 may be referred to as a second user operation.
  • Displaying the user interface shown in FIG. 3C may be referred to as displaying the third image.
  • Displaying the third image in the control 314 may be referred to as displaying a thumbnail of the third image.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated into one or more processors.
  • The controller can generate operation control signals according to the instruction opcode and timing signals, completing the control of instruction fetching and execution.
  • A memory may also be provided in the processor 110 for storing instructions and data.
  • In some embodiments, the memory in the processor 110 is a cache memory.
  • This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thus improving system efficiency.
  • The processor 110 may include one or more interfaces.
  • The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • The PCM interface can also be used for audio communication, sampling, quantizing, and encoding analog signals.
  • In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, realizing the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193.
  • The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like.
  • In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100.
  • The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100.
  • The interface connection relationships between the modules illustrated in the embodiments of the present invention are only schematic and do not constitute a structural limitation of the electronic device 100.
  • In other embodiments, the electronic device 100 may also adopt interface connection manners different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
  • The charging management module 140 is configured to receive charging input from a charger.
  • The charger may be a wireless charger or a wired charger.
  • In some wired charging embodiments, the charging management module 140 can receive charging input from a wired charger through the USB interface 130.
  • In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
  • The power management module 141 is used to connect the battery 142, the charging management module 140, and the processor 110.
  • The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and so on.
  • The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance).
  • In some other embodiments, the power management module 141 may also be disposed in the processor 110, or the power management module 141 and the charging management module 140 may be disposed in the same device.
  • The wireless communication function of the electronic device 100 can be realized through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover one or more communication frequency bands, and different antennas may be multiplexed to improve antenna utilization.
  • For example, antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • In other embodiments, an antenna may be used in combination with a tuning switch.
  • The modem processor may include a modulator and a demodulator.
  • The modulator is used to modulate the low-frequency baseband signal to be transmitted into a medium- or high-frequency signal.
  • The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal, which it then sends to the baseband processor for processing.
  • After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor.
  • The application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.) or displays images or videos through the display screen 194.
  • In some embodiments, the modem processor may be a stand-alone device; in other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • The wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • The wireless communication module 160 receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through antenna 2.
  • In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies.
  • The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
  • The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • The electronic device 100 realizes the display function through the GPU, the display screen 194, and the application processor.
  • The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is used to perform the mathematical and geometric calculations for graphics rendering.
  • The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • The display screen 194 is used to display images, videos, and the like.
  • The display screen 194 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • In the embodiments of the present application, the electronic device 100 displays the user interfaces (FIG. 3A-FIG. 3C), the preview images captured by the camera, and the photos or videos taken by the user through the GPU, the display screen 194, and the application processor.
  • The electronic device 100 can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor.
  • The ISP is used to process the data fed back by the camera 193.
  • When shooting, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye.
  • The ISP can also perform algorithmic optimization on image noise, brightness, and skin tone.
  • The ISP can also optimize parameters such as the exposure and color temperature of the shooting scene.
  • In some embodiments, the ISP may be located in the camera 193.
  • The camera 193 is used to capture still images or videos.
  • An object generates an optical image through the lens, which is projected onto the photosensitive element.
  • The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP, which converts it into a digital image signal.
  • The ISP outputs the digital image signal to the DSP for processing.
  • The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency-point energy.
  • Video codecs are used to compress or decompress digital video.
  • The electronic device 100 may support one or more video codecs.
  • In this way, the electronic device 100 can play or record videos in multiple encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • In the embodiments of the present application, the electronic device 100 includes two or more cameras 193.
  • In some embodiments, the images captured by the camera 193 and processed by the ISP may be sent directly to the GPU for display on the display screen 194; the electronic device 100 then performs masking processing on the original image according to the specific content the user shot.
  • In other embodiments, the electronic device 100 can process the images collected by the camera 193 before sending them for display, so that the image displayed through the GPU, the display screen 194, and so on does not include the masked object.
  • The NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be realized through the NPU.
  • The internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
  • Random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM; fifth-generation DDR SDRAM is generally called DDR5 SDRAM), and so on.
  • Non-volatile memory may include magnetic disk storage devices and flash memory.
  • Flash memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc.; its storage cells may be single-level cells (SLC), multi-level cells (MLC), triple-level cells (TLC), quad-level cells (QLC), etc.; and it may take the form of universal flash storage (UFS), an embedded multimedia memory card (eMMC), and so on.
  • The random access memory can be directly read and written by the processor 110. It can be used to store the executable programs (such as machine instructions) of the operating system or of other running programs, and can also be used to store user and application data.
  • The non-volatile memory can also store executable programs and user and application data, which can be loaded into the random access memory in advance for the processor 110 to read and write directly.
  • The external memory interface 120 can be used to connect an external non-volatile memory to expand the storage capacity of the electronic device 100.
  • The external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement the data storage function; for example, files such as music and videos are stored in the external non-volatile memory.
  • In the embodiments of the present application, image data such as the photos and videos taken by the user can be stored in the non-volatile memory connected through the external memory interface 120, for the user to use at any time.
  • The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
  • The audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal.
  • The audio module 170 may also be used to encode and decode audio signals.
  • In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • The speaker 170A, also referred to as the "horn", is used to convert audio electrical signals into sound signals.
  • The electronic device 100 can play music or take hands-free calls through the speaker 170A.
  • The receiver 170B, also called the "earpiece", is used to convert audio electrical signals into sound signals.
  • When answering a call or listening to a voice message, the receiver 170B can be placed close to the ear to receive the voice.
  • The microphone 170C, also called the "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal.
  • The electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, realize directional recording, and so on.
  • The earphone interface 170D is used to connect wired earphones.
  • The earphone interface 170D may be the USB interface 130, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • The pressure sensor 180A is used to sense pressure signals and convert them into electrical signals.
  • In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
  • There are many types of pressure sensor 180A, such as resistive, inductive, and capacitive pressure sensors.
  • A capacitive pressure sensor may comprise at least two parallel plates carrying conductive material.
  • The electronic device 100 determines the pressure intensity according to the change in capacitance.
  • When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A.
  • The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A.
  • In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is below a first pressure threshold acts on the short message application icon, an instruction to view short messages is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the same icon, an instruction to create a new short message is executed.
  • The gyro sensor 180B can be used to determine the motion posture of the electronic device 100.
  • In some embodiments, the angular velocities of the electronic device 100 around three axes may be determined through the gyro sensor 180B.
  • The gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to that angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, realizing anti-shake.
  • The gyro sensor 180B can also be used in navigation and somatosensory gaming scenarios.
  • The air pressure sensor 180C is used to measure air pressure.
  • In some embodiments, the electronic device 100 calculates altitude from the air pressure value measured by the air pressure sensor 180C, to assist positioning and navigation.
  • The magnetic sensor 180D includes a Hall sensor.
  • The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip leather case.
  • In some embodiments, when the electronic device 100 is a clamshell device, the electronic device 100 can detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • Features such as automatic unlocking upon flipping open can then be set according to the detected opening and closing state of the leather case or clamshell.
  • The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device and is applied in landscape/portrait switching, pedometers, and the like.
  • The distance sensor 180F is used to measure distance.
  • The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 100 may use the distance sensor 180F for ranging to achieve fast focusing.
  • The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode.
  • The light-emitting diode may be an infrared light-emitting diode.
  • The electronic device 100 emits infrared light through the light-emitting diode.
  • The electronic device 100 uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may determine that there is an object near the electronic device 100; when insufficient reflected light is detected, it may determine that there is no object near the electronic device 100.
  • The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power.
  • The proximity light sensor 180G can also be used in leather-case mode and for automatic unlocking and screen locking in pocket mode.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from shutting down abnormally due to the low temperature.
  • when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by the low temperature.
  • the touch sensor 180K is also called “touch device”.
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the electronic device 100 uses the touch sensor 180K to detect user operations acting on the display screen 194 (tap, double-tap, long-press, etc.).
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 .
  • different application scenarios (for example, time reminders, incoming messages, alarm clocks, and games) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • electronic devices such as mobile phones and tablet computers can capture images with parallax through two or more cameras. The electronic device can then, based on the parallax, cut and replace the images captured by different cameras, thereby masking content that the user does not want in the image and improving the user's shooting experience.
  • a set of image frames captured by two or more cameras naturally has parallax, so the user can mask image content using the parallax without adjusting the position of the electronic device or themselves.
  • the term "user interface (UI)" in the specification, claims, and drawings of this application is a medium interface for interaction and information exchange between an application program or operating system and the user; it realizes the conversion between the internal form of information and a form acceptable to the user.
  • the user interface of an application program is source code written in specific computer languages such as Java and extensible markup language (XML); the interface source code is parsed and rendered on the terminal device and finally presented as content the user can recognize, such as pictures, text, buttons, and other controls.
  • a control, also known as a widget, is a basic element of the user interface.
  • typical controls include the toolbar, menu bar, text box, button, scrollbar, images, and text.
  • the properties and contents of the controls in the interface are defined through labels or nodes.
  • XML specifies the controls contained in the interface through nodes such as <Textview>, <ImgView>, and <VideoView>.
  • a node corresponds to a control or property in the interface, and the node is parsed and rendered to present the content visible to the user.
  • the interfaces of many applications, such as hybrid applications, usually include web pages.
  • a web page, also called a page, can be understood as a special control embedded in the application program interface.
  • a web page is source code written in a specific computer language, such as hyper text markup language (HTML), cascading style sheets (CSS), JavaScript (JS), etc.
  • the specific content contained in the web page is also defined by the tags or nodes in the source code of the web page.
  • HTML defines the elements and attributes of the web page through tags such as <p>, <img>, <video>, and <canvas>.
  • the commonly used form of a user interface is the graphical user interface (GUI), which refers to a user interface displayed graphically and related to computer operations.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, solid state hard disk), etc.
  • the processes can be completed by a computer program instructing the related hardware.
  • the programs can be stored in computer-readable storage media.
  • when the programs are executed, the processes of the foregoing method embodiments may be included.
  • the aforementioned storage medium includes: ROM or random access memory RAM, magnetic disk or optical disk, and other various media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of this application provide a multi-camera-based image content masking method and apparatus. The method can be applied to electronic devices with shooting capability, such as mobile phones and tablet computers. By implementing the multi-camera-based image content masking method provided in the embodiments of this application, an electronic device such as a mobile phone or tablet computer can capture images with parallax through two or more cameras. The electronic device can then, based on the parallax, cut and replace the images captured by the different cameras, thereby masking content that the user does not want to appear in the image and improving the user's shooting experience.

Description

Multi-camera-based image content masking method and apparatus
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 27, 2021, with application number 202110849366.6 and entitled "Multi-camera-based image content masking method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of terminals, and in particular, to a multi-camera-based image content masking method and apparatus.
Background
At present, users have a strong demand for taking selfies with electronic devices such as mobile phones, which has also led to the emergence of the selfie stick. The selfie stick effectively solves the problem of limited composition when taking selfies, especially in scenarios such as outdoor sports, group selfies, or full-body selfies. However, shooting with a selfie stick also creates a new problem: the selfie stick appears in the selfie photos, degrading the user experience.
Summary
This application provides a multi-camera-based image content masking method and apparatus. The method can be applied to electronic devices with shooting capability, such as mobile phones and tablet computers. By implementing the method, an electronic device such as a mobile phone or tablet computer can stitch a set of parallax images captured by two or more cameras, thereby obtaining an image in which specific image content is masked, improving the user's shooting experience.
According to a first aspect, this application provides an image content masking method, applied to an electronic device having cameras. The method includes: acquiring a set of image frames, the set of image frames including at least a first image and a second image, where the first image and the second image are captured at the same moment by a first camera and a second camera of the electronic device respectively, and the first camera and the second camera are both front-facing cameras or both rear-facing cameras of the electronic device; replacing a first object in the first image with image content from the second image to obtain a third image; and displaying the third image.
By implementing the method provided in the first aspect, the electronic device can use multiple front-facing or rear-facing cameras simultaneously to capture a set of image frames with parallax. Images captured by multiple front-facing or rear-facing cameras at the same moment naturally have parallax. Filling the content to be masked with image material produced by the parallax preserves, as far as possible, the image features occluded by that content and improves the masking effect. Meanwhile, stitching parallax images to mask certain image content reduces the computational load and improves processing efficiency.
With reference to the embodiments of the first aspect, in some embodiments, replacing the first object in the first image with image content from the second image to obtain the third image specifically includes: replacing the image content of a second area in the first image with the image content of a first area in the second image to obtain the third image, where the second area is the area in the first image in which the first object is displayed, and the first area corresponds to the second area.
With reference to the embodiments of the first aspect, in some embodiments, the first area corresponding to the second area includes: the first area and the second area being identical in size and position; or the first area being larger than the second area, the center of the first area coinciding with the center of the second area, and the first area covering the second area.
By implementing the method provided in the above embodiments, the first area can exactly match the area corresponding to the first object, i.e., be identical to the second area; in that case the electronic device can fill the first object to be masked with the image content of the first area. The first area can also be larger than the second area; in that case the first area not only fills the first object in the second area but also avoids the jagged-edge phenomenon caused by incomplete masking, improving the masking effect.
With reference to the embodiments of the first aspect, in some embodiments, before acquiring the set of image frames, the method further includes: detecting a first user operation.
By implementing the method provided in the above embodiments, the electronic device can detect a user operation and, in response to that operation, execute the image masking algorithm provided in this application. In this way, the electronic device executes the image masking algorithm according to user demand, making the application of the image masking function more targeted and reducing ineffective computation on the electronic device.
With reference to the embodiments of the first aspect, in some embodiments, the first user operation is an image shooting operation.
By implementing the method provided in the above embodiments, the electronic device can implement the image content masking method provided in this application after detecting the user's shooting operation.
With reference to the embodiments of the first aspect, in some embodiments, the image shooting operation includes: an image shooting operation detected by the electronic device, and/or an image shooting operation detected by a second device; the image shooting operation detected by the electronic device includes an operation acting on a shooting control and an operation acting on a button, where the shooting control is displayed on a user interface provided by the electronic device, and the second device is connected to the electronic device.
With reference to the embodiments of the first aspect, in some embodiments, the image of the second device in the first image is the first object.
With reference to the embodiments of the first aspect, in some embodiments, displaying the third image specifically includes: saving the third image in response to the first user operation; and, upon detecting a second user operation, displaying the third image or a thumbnail of the third image.
By implementing the method provided in the above embodiments, in response to the first user operation the electronic device can save the third image; the electronic device can then detect a second user operation and, in response, display the third image.
With reference to the embodiments of the first aspect, in some embodiments, the second user operation includes: an operation of displaying the third image in a gallery application, or an operation of a third-party application invoking the third image.
By implementing the method provided in the above embodiments, when browsing the gallery application the electronic device can display the third image; in the process of invoking an image stored in the gallery to send to another electronic device, the electronic device can display the third image.
With reference to the embodiments of the first aspect, in some embodiments, the first user operation is an operation of opening a shooting preview interface.
By implementing the method provided in the above embodiments, the electronic device can implement the image content masking method provided in this application as soon as it detects the user's operation of invoking the camera to shoot.
With reference to the embodiments of the first aspect, in some embodiments, the operation of opening the shooting preview interface includes: an operation of entering the shooting preview interface from a first application icon, the first application being an application provided by the electronic device; or an operation of entering the shooting preview interface from a first control provided by a third-party application.
By implementing the method provided in the above embodiments, upon detecting that the user opens the camera application, the electronic device can implement the image content masking method provided in this application; while running another application, upon detecting that the camera of the electronic device is invoked, the electronic device can likewise implement the method.
With reference to the embodiments of the first aspect, in some embodiments, after detecting the first user operation, the method further includes: displaying a shooting preview interface, the shooting preview interface including a preview window for displaying in real time the images captured by the camera of the electronic device; displaying the third image specifically includes: displaying the third image in the preview window.
By implementing the method provided in the above embodiments, the electronic device can display the masked image in the preview window as soon as the shooting preview interface is displayed. In this way, the user can see the masked shooting effect before shooting.
With reference to the embodiments of the first aspect, in some embodiments, the second device is a selfie stick, and the first object is the selfie stick in the image.
By implementing the method provided in the above embodiments, the electronic device can mask the selfie stick appearing in the preview window or in captured photos or videos, improving the user's shooting experience.
With reference to the embodiments of the first aspect, in some embodiments, the selfie stick includes a stick body and a clamping part; the first object being the selfie stick in the image specifically includes: the first object being the stick body of the selfie stick in the image.
According to a second aspect, this application provides an electronic device, including one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code including computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method described in the first aspect and any possible implementation of the first aspect.
According to a third aspect, this application provides a computer-readable storage medium including instructions which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation of the first aspect.
According to a fourth aspect, this application provides a computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation of the first aspect.
It can be understood that the electronic device provided in the second aspect, the computer storage medium provided in the third aspect, and the computer program product provided in the fourth aspect are all used to perform the method provided in this application. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the electronic device provided by an embodiment of this application;
FIG. 2 is a set of parallax images provided by an embodiment of this application;
FIG. 3A-FIG. 3C are a set of user interfaces provided by an embodiment of this application;
FIG. 4 is a flowchart of masking image content provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the electronic device acquiring a set of parallax images, provided by an embodiment of this application;
FIG. 6A is a schematic diagram of the electronic device recognizing the selfie stick in an image, provided by an embodiment of this application;
FIG. 6B and FIG. 6C are structural diagrams of the encoder and decoder provided by an embodiment of this application;
FIG. 7 is a schematic diagram of masking image content provided by an embodiment of this application;
FIG. 8 is a hardware structural diagram of the electronic device provided by an embodiment of this application.
Detailed Description
The terms used in the following embodiments of this application are only for the purpose of describing particular embodiments and are not intended to limit this application.
With the improvement of the shooting capability of mobile smart terminals such as mobile phones and tablet computers, in particular the application on such terminals of cameras with depth-sensing capability such as depth-of-field (DOF) cameras and time-of-flight (TOF) cameras, mobile smart terminals can already capture images containing depth information, for example RGBD images. A camera with depth-sensing capability may be called a depth-sensing camera.
An image captured by a depth-sensing camera can include depth information of the objects in the image. An image processing module can generate a 3D model of the selfie stick from this depth information and thereby determine the selfie-stick area in the selfie image. The image processing module can then mask the selfie-stick area so that the selfie stick does not appear in the image obtained by the user, improving the shooting experience.
However, masking the selfie stick with a depth-sensing camera incurs a high computational cost. In particular, if the method is applied to video image processing, the computational load on the electronic device grows significantly, and the time cost of the computation increases accordingly. For the user, a long computation time severely degrades the experience.
Therefore, in order to prevent content the user does not want, such as a selfie stick, from appearing in photos or videos and degrading the shooting experience, while also reducing the computational load and time cost of image processing, an embodiment of this application provides a multi-camera-based image content masking method. The method can be applied to electronic devices with shooting capability, such as mobile phones and tablet computers.
By implementing the method, an electronic device such as a mobile phone or tablet computer (electronic device 100) can capture images through two or more cameras. Because different cameras occupy different spatial positions, images captured by different cameras at the same moment have parallax. Electronic device 100 can then, based on this parallax, cut and replace the images captured by different cameras, thereby masking the part of the image content the user does not want and improving the shooting experience.
Meanwhile, because the processed images are ordinary images that contain no depth information, the computational load of the masking process on electronic device 100 is significantly reduced, and the time cost drops accordingly. The method is therefore also applicable to video image processing.
Taking mobile phone A as electronic device 100 as an example, FIG. 1 exemplarily shows a schematic diagram of mobile phone A.
As shown in FIG. 1, mobile phone A includes two cameras, cam1 and cam2. The cameras cam1 and cam2 may be a pair of front-facing cameras or a pair of rear-facing cameras of mobile phone A.
In this embodiment of the application, cam1 and cam2 may be standard lenses. When the front-facing cameras are enabled for shooting, both cam1 and cam2 capture images and produce time-ordered image frame streams. The frame stream generated by cam1 may be called stream1; the frame stream generated by cam2 may be called stream2. When cam1 and cam2 are standard lenses, the images they capture are RGB images.
Preferably, cam1 and cam2 are color cameras. They are not limited to color cameras: cam1 and cam2 may also be monochrome cameras. For example, cam1 may be a color camera and cam2 a monochrome camera, or cam1 may be a monochrome camera and cam2 a color camera. In addition, mobile phone A is not limited to cam1 and cam2 and may include more cameras, that is, more front-facing cameras and/or more rear-facing cameras. Further, these additional cameras may be standard lenses, wide-angle lenses, telephoto lenses, periscope zoom lenses, depth-of-field lenses, and so on. This embodiment of the application places no restriction on this.
Correspondingly, different types of cameras capture images containing different information. For example, when mobile phone A also includes a depth-of-field lens, mobile phone A can obtain RGBD images through that lens; an RGBD image records depth information of the objects in the image.
The spacing between cam1 and cam2 is D. Owing to the spacing D, the images captured by cam1 and cam2 exhibit parallax; refer to the principle of binocular vision. FIG. 2 exemplarily shows that images captured by different cameras exhibit parallax.
FIG. 2 includes a (left) image and a (right) image. The (left) image is a frame captured by cam1; the (right) image is a frame captured by cam2 at the same moment. Both images contain selfie stick 10 and person 20.
In the (left) image, selfie stick 10 is left of the image center and covers the user's right leg. In the (right) image, selfie stick 10 is at the center of the image, between the user's legs. That selfie stick 10 occupies different positions in the (left) and (right) images reflects precisely that images captured by different cameras at the same moment exhibit parallax.
Based on this parallax, after acquiring multiple parallax frames, mobile phone A can stitch them, fusing the multiple parallax frames into a single image in which the selfie stick is masked. The fused image may be called the output image. After obtaining the output image, mobile phone A can store it for further user operations, such as editing, browsing, and so on.
It can be understood that when the spacing D is small, that is, cam1 and cam2 are close together, their fields of view differ little and the parallax between the images they capture at the same moment is small. When the spacing D is large, their fields of view differ more, and the parallax between the images they capture at the same moment is larger.
That is to say, for mobile phone A, when the installed cam1 and cam2 have a larger spacing, the parallax between the captured images is more pronounced, and masking the selfie stick based on parallax works better.
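To make this relation concrete, the standard rectified binocular-geometry formula can be applied (a textbook result assumed here; this application does not state the formula itself). A scene point at depth $Z$, viewed by two cameras with focal length $f$ (in pixels) and baseline $B$ (the spacing $D$ above), appears displaced between the two images by the disparity

$$d = \frac{f \cdot B}{Z}$$

in pixels. Disparity grows linearly with the baseline, which is why a larger spacing between cam1 and cam2 yields more parallax to work with, and why distant backgrounds (large $Z$) shift very little between the two frames.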
In other embodiments, when mobile phone A has more cameras, mobile phone A can acquire more image frames (of the same moment), any two of which exhibit parallax. Mobile phone A can then fuse these additional images, improving the masking effect.
It can be understood that when cam1 and cam2 are both front-facing cameras, "more cameras" here means more front-facing cameras, and "more image frames" means images captured by those additional front-facing cameras. Conversely, when cam1 and cam2 are both rear-facing cameras, "more cameras" means more rear-facing cameras and "more image frames" means images captured by those additional rear-facing cameras.
Electronic device 100 is not limited to mobile phones; it also includes tablet computers, smart watches, smart bands, action cameras, augmented reality (AR) devices, virtual reality (VR) devices, and other portable handheld devices with shooting capability. Exemplary embodiments of electronic device 100 include, but are not limited to, portable electronic devices running Linux or other operating systems (the original text names further operating systems via trademark images, Figure PCTCN2022089274-appb-000001 and Figure PCTCN2022089274-appb-000002, omitted here).
FIG. 3A-FIG. 3C exemplarily show a set of user interfaces in a photographing scenario. The following describes, with reference to the user interfaces shown in FIG. 3A-FIG. 3C, scenarios for implementing the multi-camera-based image content masking method provided in the embodiments of this application. In the embodiments described in FIG. 3A-FIG. 3C, the shooting scenario is a selfie scenario, and the image content the user wants masked is the selfie stick.
Electronic device 100 can detect a user operation of invoking the camera to display a shooting preview interface and, in response to that operation, display the shooting preview interface shown in FIG. 3A.
The user operation of invoking the camera to display the shooting preview interface includes: an operation acting on the icon of the camera application built into electronic device 100, and an operation acting on a shooting control provided by a third-party application, for example a photo control or video-call control provided by a third-party application (identified in the original text by a trademark image, Figure PCTCN2022089274-appb-000003, omitted here).
FIG. 3A shows user interface 31, in which electronic device 100 displays the shooting preview interface. As shown in FIG. 3A, user interface 31 includes preview window 311, mode bar 312, shooting button 313, and image preview window 314.
Preview window 311 can display in real time the images captured by the camera. Here, the image displayed in preview window 311 may be called the original image. It can be understood that the original image has also undergone some basic processing by the image processing module, such as color temperature calibration, linear correction, noise removal, dead pixel removal, interpolation, and automatic exposure control. However, these basic processing operations are all processing that an optical signal must undergo to be converted into an electrical signal and imaged on an electronic device such as a mobile phone. Therefore, in the embodiments of this application, an image that has undergone this basic image processing and is displayed in preview window 311 may be called the original image.
Mode bar 312 is used to select a shooting mode, including night mode, portrait mode, photo mode, video mode, professional mode, and so on. In the photographing user interfaces shown in FIG. 3A-FIG. 3C, electronic device 100 is in photo mode.
Shooting button 313 receives the user's operation of taking a photo. When a user operation controlling electronic device 100 to shoot is detected, in response to that operation, electronic device 100 can determine that the image frame displayed in preview window 311 at that moment is the captured image.
After determining that the image frame displayed in preview window 311 is the captured image, electronic device 100 can store the captured image in a designated storage space. Electronic device 100 can then display a thumbnail of the captured picture in image preview window 314. Meanwhile, electronic device 100 can receive user operations acting on image preview window 314, providing the user with an operation for browsing the captured pictures.
User interface 31 may also include other controls, for example the multiple controls in area 315 for adjusting the shooting picture, switch button 316, and so on. Switch button 316 can be used to switch the camera of electronic device 100 (front-facing camera or rear-facing camera). It can be understood that user interface 31 shown in FIG. 3A is one possible user interface and should not constitute a limitation on the embodiments of this application.
When a user operation controlling electronic device 100 to shoot is detected, in response to that operation, electronic device 100 can display user interface 32 shown in FIG. 3B. In this embodiment of the application, in the scenario where the user takes a selfie with a selfie stick, the user operation controlling electronic device 100 to shoot includes: an operation acting on shooting button 313 in FIG. 3A, and an operation acting on the shooting button on the selfie stick.
In other embodiments, controlling electronic device 100 to shoot also includes shooting operations acting on physical buttons of electronic device 100, and so on.
At this point, image preview window 314 can display a thumbnail of the picture generated a moment earlier in response to the user's shooting operation. Electronic device 100 can then detect a user operation acting on image preview window 314 and, in response to that operation, display user interface 33 shown in FIG. 3C.
As shown in FIG. 3C, user interface 33 is the user interface on electronic device 100 for showing captured photos. User interface 33 includes area 331. Area 331 can display images taken by the user, including single-frame images and multi-frame images; single-frame images are, for example, photos, while multi-frame images include videos, animated GIFs, and so on.
In addition, user interface 33 may include other controls, such as control 332 and control 333. In response to an operation acting on control 332, mobile phone A can display more pictures in the album. In response to an operation acting on control 333, mobile phone A can display detailed information about the image in area 331, such as the shooting time, shooting location, image size (storage footprint), resolution, and storage path. The embodiments of this application place no restriction on this.
Comparing the image displayed in preview window 311 with the captured image displayed in area 331: the former includes selfie stick 10 and person 20; the latter includes only person 20. That is to say, after detecting the user's shooting operation, mobile phone A can recognize selfie stick 10 in the original image (i.e., the image displayed in preview window 311) and remove it from the original image, so that the photo the user obtains does not include the selfie stick, improving the user's shooting experience.
In other embodiments, after detecting the user's shooting operation and generating the masked image, electronic device 100 does not immediately display the masked image. Instead, electronic device 100 can detect a user operation invoking that image, for example an operation of displaying the image in the gallery, or an operation of sending the image to another electronic device. In the course of executing the invoking operation, electronic device 100 can display the image or its thumbnail.
Not limited to the photographing scenario described in FIG. 3A-FIG. 3C, the multi-camera-based image content masking method provided in the embodiments of this application can also be applied to video shooting scenarios.
In a video shooting scenario, after the user's operation to start shooting is detected, the images displayed in preview window 311 are recorded and stored by mobile phone A (the video). After shooting is completed, mobile phone A can display a summary of the captured video in image preview window 314. Here, the video summary includes: the cover picture of the video, and/or marks indicating a video-type image file, such as a time bar or a play icon.
After detecting a user operation acting on image preview window 314, mobile phone A can display the user's captured video in area 331. During video shooting, the image displayed in preview window 311 may include selfie stick 10 and person 20; in the video mobile phone A displays in area 331, however, the video frames need not include selfie stick 10. That is, after shooting is completed, mobile phone A can recognize whether the video includes selfie stick 10. If selfie stick 10 is recognized, mobile phone A can remove it from the video and generate a new video whose content does not include selfie stick 10.
Although mobile phone A captured the selfie stick while recording, the generated video file does not include it. In this way, the user directly obtains a selfie video in which the selfie stick is masked.
Further, in the photographing scenario or in the video shooting scenario, mobile phone A can also display the processed image in preview window 311. Specifically, as soon as the cameras capture an image, mobile phone A can apply the masking processing to it and then display the masked image in preview window 311. In this way, the video generated after shooting is identical in image content to the image stream shown in preview window 311 during shooting; that is, the user can already see the stick-free shooting effect in preview window 311 while shooting, improving the shooting experience.
The following describes, with reference to FIG. 4, the flow of the multi-camera-based image content masking method provided in the embodiments of this application.
S101: Electronic device 100 detects that the user is shooting with the camera.
Here, shooting with the camera includes taking selfies with the front-facing camera and general shooting with the rear-facing camera. Normally, a selfie stick appearing in the picture happens mostly in selfie scenarios.
Therefore, the following mainly takes the selfie scenario as an example to describe how electronic device 100 determines that the user is taking a selfie with the camera. In the embodiments of this application, the selfie scenario specifically includes taking photos (image frames) and shooting video (image frame streams).
In a selfie scenario, the working state of electronic device 100 includes: running an application with a shooting function, with that application invoking the front-facing camera of electronic device 100. Therefore, electronic device 100 can judge whether the user is taking a selfie by whether the front-facing camera is in the working state.
A front-facing camera in the working state can indicate a selfie scenario, but not necessarily a selfie-stick scenario. Therefore, electronic device 100 can further detect whether a selfie stick is connected. When a connected selfie stick is detected, electronic device 100 can confirm that the user is taking a selfie with a selfie stick.
Specifically, electronic device 100 is provided with an external interface, and the selfie stick includes a button for controlling shooting and a connecting cable. The external interface includes a 3.5 mm headphone jack, a Type-C port, a Lightning port, and so on; the embodiments of this application place no restriction on this. The connecting cable of the selfie stick can connect to the external interface, and the user can then control the shooting tasks of electronic device 100 through the shooting button on the selfie stick.
Therefore, with the front-facing camera on, when another device is detected on the external interface, electronic device 100 can determine that the user is taking a selfie with a selfie stick. Referring to FIG. 3A, when electronic device 100 displays user interface 31 shown in FIG. 3A, electronic device 100 can determine that the user is in a selfie scenario.
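As a minimal sketch of this S101 decision (the three predicate names below are hypothetical placeholders; the application does not name a concrete API for querying camera or accessory state):

```python
# Sketch of the S101 scenario check. All three inputs are hypothetical
# state flags assumed to be supplied by the platform; the patent only
# describes the conditions, not how they are queried.
def should_enable_stick_masking(shooting_app_running: bool,
                                front_camera_active: bool,
                                external_interface_connected: bool) -> bool:
    """Enable the masking algorithm only in the refined selfie-stick
    scenario: a shooting-capable app is running, the front camera is in
    the working state, and a device (e.g., a selfie stick plugged into
    the 3.5 mm jack or Type-C port) is attached to the external
    interface."""
    return (shooting_app_running
            and front_camera_active
            and external_interface_connected)
```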
In shooting scenarios other than selfies, electronic device 100 can also implement the multi-camera-based image content masking method provided in the embodiments of this application. For example, when shooting a single-person photo, the user can choose to remove other people who happen to appear within the framing range. That is to say, the shooting scenario is not limited to selfie scenarios and can also be an ordinary shooting scenario using the rear-facing camera. Furthermore, the masked object is not limited to the selfie stick and can be other preset image content that electronic device 100 can recognize. The embodiments of this application place no restriction on this.
S102: Electronic device 100 acquires a set of image frames with parallax.
After recognizing the selfie-stick selfie scenario, electronic device 100 can enable the selfie-stick masking method provided in the embodiments of this application. That is to say, implementing the multi-camera selfie-stick masking method provided in the embodiments of this application is optional.
In some embodiments, electronic device 100 can implement the above method in any shooting scenario, recognizing and masking the selfie stick in the image. In other embodiments, electronic device 100 can also further refine the shooting scenario: only after recognizing a preset refined scenario does electronic device 100 implement the above method to recognize and mask the selfie stick in the image, for example after determining that the user is in the selfie scenario described in S101 or, further, in the scenario of taking a selfie with a selfie stick.
In this way, electronic device 100 can enable the function in a targeted manner, avoiding the waste of computing resources.
After enabling the multi-camera selfie-stick masking method provided in the embodiments of this application, electronic device 100 can acquire a set of parallax image frames captured at the same moment by different cameras. Specifically, taking cam1 and cam2 in the working state in a selfie scenario as an example, FIG. 5 exemplarily shows cam1 and cam2 capturing images and generating image frame streams.
As shown in FIG. 5, stream1 denotes the image frame stream captured and generated by cam1, and stream2 denotes the image frame stream captured and generated by cam2. stream1 and stream2 each include multiple consecutive image frames, and the frames in stream1 and stream2 are time-aligned. Here, time alignment means that any image frame in stream1 has the same timestamp as the frame at the corresponding position in stream2; that is, frames at the same position in stream1 and stream2 were captured at the same moment by cam1 and cam2 respectively.
Specifically, stream1 includes image frames T1, T2, T3, T4, and T5; stream2 includes image frames S1, S2, S3, S4, and S5. T1 and S1 are images captured by cam1 and cam2 at the same moment. The relations of T2, T3, T4, T5 to S2, S3, S4, S5 follow the above description of T1 and S1 and are not repeated here.
T1 and S1 may be called a set of image frames with parallax. Likewise, T2 and S2, T3 and S3, T4 and S4, and T5 and S5 are each a set of image frames with parallax. It can be understood that when more cameras are used for shooting, electronic device 100 acquires more parallax image frames.
In this way, electronic device 100 can conveniently and quickly obtain parallax images, without the user adjusting the position of mobile phone A or moving their own position.
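A minimal sketch of how the time alignment of stream1 and stream2 can be exploited in code, assuming each frame carries a capture timestamp (the Frame type and the nanosecond tolerance below are illustrative assumptions, not part of this application):

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional, Tuple

@dataclass
class Frame:
    timestamp_ns: int   # capture time of the frame
    pixels: object      # H x W x 3 RGB array

def pair_frames(stream1: Iterable[Frame],
                stream2: Iterable[Frame],
                tolerance_ns: int = 1_000_000) -> Iterator[Tuple[Frame, Frame]]:
    """Yield (T, S) pairs such as (T1, S1): for every frame in stream1,
    find the stream2 frame with the closest timestamp and accept it only
    if the two capture times match within the tolerance."""
    frames2 = list(stream2)
    for t in stream1:
        s: Optional[Frame] = min(
            frames2,
            key=lambda f: abs(f.timestamp_ns - t.timestamp_ns),
            default=None)
        if s is not None and abs(s.timestamp_ns - t.timestamp_ns) <= tolerance_ns:
            yield t, s
```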
Since, in a non-moving shooting state, the content of two or more consecutive frames generally does not differ greatly, the parallax images acquired by electronic device 100 may also include images captured by the cameras at different moments.
In other embodiments, electronic device 100 can also acquire multiple consecutive frames as a set of image frames and then fuse those frames to obtain an image in which the selfie stick or other image content is masked.
The consecutive multiple frames include: two or more consecutive frames from stream1, and two or more frames from stream2. Specifically, referring to FIG. 5, the consecutive frames are, for example: T1, T2, S1, S2; or: T1, T2, T3, S1, S2, S3.
In this way, electronic device 100 can acquire more parallax images, further improving the effect of masking the selfie stick or other image content.
The following takes the set of image frames T1 and S1 as an example to describe how electronic device 100 uses these frames to mask part of the image content. With reference to FIG. 2, the image content of T1 corresponds to the (left) image in FIG. 2, and the image content of S1 corresponds to the (right) image in FIG. 2.
S103: Electronic device 100 locates the selfie stick in image frame T1.
After acquiring T1 and S1, electronic device 100 first locates the selfie stick in image frame T1. Specifically, electronic device 100 can apply an image segmentation algorithm to T1 to obtain an edge image (mask). An edge image is an image in which the image content is labeled and distinguished, with the edge contours of the different objects in the image serving as boundaries.
FIG. 6A exemplarily shows electronic device 100 segmenting image frame T1. As shown in FIG. 6A, the frame on the left is image frame T1 acquired by electronic device 100 from the frame stream captured by cam1 (frame T1 in stream1); the image on the right represents the edge image (T1-mask) obtained after electronic device 100 segments T1.
Using the image segmentation algorithm, electronic device 100 can determine the objects contained in T1 and the edges of each object. Taking T1 as an example, the image content of T1 includes person 20 and selfie stick 10. Through image segmentation, the edge image T1-mask that electronic device 100 obtains for T1 contains: selfie stick 601, portrait 602, and background 603.
In the embodiments of this application, the image segmentation algorithm used by electronic device 100 is a semantic image segmentation algorithm. The algorithm includes an encoder and a decoder, both built on deep neural networks. The encoder is a data compression algorithm used to extract features of the input image; the decoder is the inverse reconstruction of the encoder, decoding back from the deep feature space.
FIG. 6B shows the structure of the encoder. As shown in FIG. 6B, the encoder includes front end 60 and back end 61. Front end 60 includes three convolutional layers (conv2). The back end is jointly composed of Block1 and Block2, where Block1 includes two convolutional layers (conv2) and Block2 includes three convolutional layers (conv2).
In the image segmentation process, electronic device 100 first feeds T1 into the encoder of the segmentation algorithm to extract the image features of T1. The image frames generated by cam1 and cam2 are three-channel RGB images. After the encoder's decomposition, reconstruction, and feature extraction, the three-channel RGB image is transformed into 196-channel high-dimensional features.
FIG. 6C shows the structure of the decoder. As shown in FIG. 6C, the decoder includes front end 62 and back end 63. Front end 62 includes three upsampling modules (upsample); an upsampling module includes three convolutional layers (conv2) and one linear interpolation layer. Back end 63 is a single convolutional layer (conv2).
After reconstructing T1 through the encoder, electronic device 100 can feed the reconstructed T1 into the decoder to inversely reconstruct the featurized T1, restoring the 196-channel high-dimensional features to a 3-channel RGB image and marking the segmentation result in that image.
Referring to FIG. 6A, after processing image frame T1 with the segmentation algorithm, electronic device 100 obtains the edge image shown as T1-mask. The content displayed in T1-mask includes: selfie stick 601, portrait 602, and background 603. Electronic device 100 can then locate selfie stick 601 in the picture.
It can be understood that the encoder shown in FIG. 6B and the decoder shown in FIG. 6C are each one possible structure and should not constitute a limitation on the embodiments of this application. That is to say, in other embodiments the encoder and decoder may have different hierarchical structures, including different numbers of convolutional layers and different ways of combining them.
In addition, electronic device 100 is not limited to the semantic segmentation algorithm used in the embodiments of this application for locating the selfie stick in an image frame; other image segmentation algorithms can also be used, such as the feature-encoding-based VGGNet and ResNet, the region-based convolutional neural network (R-CNN), the traditional region-growing algorithm, edge-detection-based segmentation methods, and so on. The embodiments of this application place no restriction on this.
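The PyTorch sketch below mirrors the layer counts described for FIG. 6B and FIG. 6C; the kernel sizes, strides, activations, pooling, and all channel widths other than the 196-channel feature space are assumptions, since the application only fixes the number of convolutional layers per stage:

```python
# Encoder-decoder segmentation sketch with the FIG. 6B/6C layer counts.
import torch
import torch.nn as nn
import torch.nn.functional as F

def convs(c_in, c_out, n):
    # n conv layers with 3x3 kernels (kernel size is an assumption)
    layers = []
    for i in range(n):
        layers += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.front = convs(3, 64, 3)       # front end 60: three conv layers
        self.block1 = convs(64, 128, 2)    # back end 61, Block1: two conv layers
        self.block2 = convs(128, 196, 3)   # Block2: three conv layers -> 196 channels

    def forward(self, x):
        x = F.max_pool2d(self.front(x), 2)   # downsampling choice is assumed
        x = F.max_pool2d(self.block1(x), 2)
        return F.max_pool2d(self.block2(x), 2)

class Upsample(nn.Module):
    # decoder front end 62: each module = three conv layers + linear interpolation
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = convs(c_in, c_out, 3)

    def forward(self, x):
        x = self.conv(x)
        return F.interpolate(x, scale_factor=2, mode="bilinear",
                             align_corners=False)

class Decoder(nn.Module):
    def __init__(self, n_classes=3):   # e.g., selfie stick / portrait / background
        super().__init__()
        self.up = nn.Sequential(Upsample(196, 128),
                                Upsample(128, 64),
                                Upsample(64, 32))
        self.head = nn.Conv2d(32, n_classes, 1)   # back end 63: one conv layer

    def forward(self, x):
        return self.head(self.up(x))

# usage: logits = Decoder()(Encoder()(torch.randn(1, 3, 256, 256)))
```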
S104: Electronic device 100 replaces the selfie stick in T1 with image frame S1, obtaining the masked image.
The selfie stick 601 in T1 may be called the masked object, i.e., the image content that electronic device 100 needs to remove. The pixel area marked in S1 that corresponds to the positions of the selfie-stick pixels in T1 may be called the replacement material, i.e., the pixel area used to replace the selfie stick in T1.
FIG. 7 exemplarily shows electronic device 100 replacing the masked object with the replacement material. The replacement process is described below with reference to FIG. 7.
First, after determining the selfie stick in T1, electronic device 100 can crop S1 according to the pixel positions of the selfie stick in T1, determining the pixel area of the replacement material.
As shown in FIG. 7, according to the position of selfie stick 601 in T1, electronic device 100 can mark the pixels at the same positions in S1, as shown by S1' in the figure. S1 is the parallax image frame captured by cam2 at the same moment as T1; S1 has already been described in detail in the foregoing embodiments and is not repeated here. Electronic device 100 can thus determine area 60. Area 60 represents the replacement material, and the pixels included in area 60 correspond one-to-one to the pixels marked as selfie stick 601 in T1.
Then, electronic device 100 can replace the pixels marked as selfie stick 601 in T1 with the image content marked as replacement material in S1. For example, electronic device 100 can erase the image content marked as selfie stick 601 in T1 and then move the image content of area 60 in S1 into T1. In this way, electronic device 100 obtains an image frame with selfie stick 601 removed, as shown by T1'.
At this point, the content of image frame T1' includes: portrait 602 and background 603, and no longer includes selfie stick 601; that is, selfie stick 601 in T1 has been removed.
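In code, this replacement step amounts to copying pixels through a boolean mask. A minimal NumPy sketch, assuming t1 and s1 are aligned H x W x 3 arrays and stick_mask is a boolean H x W array marking the pixels segmented as selfie stick 601:

```python
import numpy as np

def replace_masked(t1: np.ndarray, s1: np.ndarray,
                   stick_mask: np.ndarray) -> np.ndarray:
    """Fill the pixels of the masked object in t1 with the pixels at the
    same positions in s1 (the 'replacement material'); the one-to-one
    pixel correspondence is what FIG. 7 describes for area 60."""
    out = t1.copy()
    out[stick_mask] = s1[stick_mask]
    return out
```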
It can be understood that T1 and S1 are exemplary images captured by the cameras of electronic device 100. The embodiments of this application place no restriction on which specific camera of electronic device 100 cam1 or cam2 is. Therefore, in other embodiments, electronic device 100 can also use image content from T1 to replace the content to be masked in S1.
Preferably, to avoid the jagged-edge phenomenon, the size of the replacement material can be larger than that of the masked object. The jagged-edge phenomenon refers to incomplete removal of the masked object, caused by the area of the masked object located by the segmentation algorithm not coinciding exactly with the position of the actual object in the original image.
In the embodiments of this application, the size of the replacement material can extend 10-20 pixels outward from the size of the masked object, so that the replacement material completely covers the masked object in the original image, avoiding the jagged-edge phenomenon and improving the masking effect.
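One way to realize this 10-20 pixel outward extension is morphological dilation of the mask; the elliptical kernel and the 15-pixel radius below are assumed choices within the stated range:

```python
import cv2
import numpy as np

def expand_mask(stick_mask: np.ndarray, radius: int = 15) -> np.ndarray:
    """Widen the boolean mask by ~radius pixels in every direction, so the
    first area is larger than, centered on, and covering the second area."""
    size = 2 * radius + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    dilated = cv2.dilate(stick_mask.astype(np.uint8), kernel)
    return dilated.astype(bool)
```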
It can be understood that when the distance D between cam1 and cam2 is small, the parallax between the frames captured by cam1 and the frames captured by cam2 is small. This may cause the image content of the replacement material to itself include the masked object. In that case, even filling the masked object with the replacement material cannot completely remove the masked object from the original image.
Therefore, going further, electronic device 100 can also self-fill the replaced image. Here, self-filling means filling dirty pixels with the pixels near them; a dirty pixel is a remnant of the masked object left after replacement, i.e., a pixel from which the masked object was not completely removed.
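The self-filling step can be sketched with OpenCV's inpainting, which fills a marked region from its surrounding pixels; the application does not prescribe a particular fill algorithm, so the Telea method here is an assumption:

```python
import cv2
import numpy as np

def self_fill(image: np.ndarray, dirty_mask: np.ndarray) -> np.ndarray:
    """image: H x W x 3 uint8 frame after replacement; dirty_mask: boolean
    H x W array marking leftover remnants of the masked object. Returns the
    frame with the dirty pixels filled from their neighborhood."""
    return cv2.inpaint(image, dirty_mask.astype(np.uint8), 3,
                       cv2.INPAINT_TELEA)
```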
S105: Electronic device 100 displays the masked image.
In the photographing scenario, after detecting a user operation acting on the shooting button, electronic device 100 can carry out the method shown in S102-S104 and then display the masked image.
With reference to the user interfaces shown in FIG. 3A-FIG. 3C: when electronic device 100 detects a user operation acting on shooting button 313, in response to that operation, electronic device 100 carries out the method shown in S102-S104 and can generate an image with selfie stick 10 masked. Further, in response to a user operation acting on image preview window 314, electronic device 100 can display the image with selfie stick 10 masked, as in user interface 33 shown in FIG. 3C.
By carrying out the method shown in S101-S105, electronic device 100 can capture images with multiple cameras during shooting. Electronic device 100 can then mask the selfie stick in the image according to the parallax between the images captured by the multiple cameras, obtaining an image without the selfie stick and satisfying the user's desire to shoot selfie images that do not contain the selfie stick.
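Putting S102-S104 together, the per-frame processing reduces to a short pipeline. The sketch below reuses the helper functions from the preceding sketches, with segment() standing in for the encoder-decoder of FIG. 6B/6C (a hypothetical callable returning a boolean selfie-stick mask):

```python
def mask_selfie_stick(t1, s1, segment):
    stick_mask = segment(t1)              # S103: locate the selfie stick in T1
    area = expand_mask(stick_mask)        # widen by 10-20 px against jagged edges
    out = replace_masked(t1, s1, area)    # S104: fill from the parallax frame S1
    return out                            # S105: display or store this frame
```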
To improve the accuracy of the image content replacement, before replacing selfie stick 10 in T1 with image content from S1, electronic device 100 can also judge the similarity between the image content of S1 and that of T1.
If the image content of S1 and T1 differs markedly, and electronic device 100 nevertheless fills the content to be masked in T1 with the image from S1, this not only fails to achieve the masking purpose but makes the processed image worse for the user's shooting experience than the unprocessed one.
For example, when cam2 is blocked by an object, such as a finger, most of the image content of S1 is the finger. If S1 is then used to fill the masked content in T1, the replacement material electronic device 100 obtains from S1 is very likely unsuited to the masked content, for example filling the selfie-stick content in T1 with finger content from S1; this likewise fails the masking purpose and degrades the image further.
Therefore, if the image content of S1 and T1 differs markedly, electronic device 100 can deem S1 incapable of replacing content in T1; going further, electronic device 100 can fill the masked image content with similar image content near it.
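A sketch of such a similarity gate follows; histogram correlation is an assumed choice of measure, since the application does not specify how the similarity is judged:

```python
import cv2
import numpy as np

def frames_similar(t1: np.ndarray, s1: np.ndarray,
                   threshold: float = 0.5) -> bool:
    """Compare two uint8 RGB frames via 8x8x8 color-histogram correlation;
    the 0.5 threshold is an assumed cutoff."""
    h1 = cv2.calcHist([t1], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    h2 = cv2.calcHist([s1], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) >= threshold

# If frames_similar(t1, s1) is False (e.g., cam2 blocked by a finger),
# fall back to filling the masked area from nearby pixels, e.g., self_fill().
```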
Not limited to the scenario of taking selfies with a selfie stick, the multi-camera-based image content masking method provided in the embodiments of this application can also be used in other shooting scenarios. For example, when shooting a single-person photo, the user can choose to remove other people who happen to appear within the framing range. The embodiments of this application place no restriction on this.
In the embodiments of this application:
A group of image frames captured at the same moment by multiple front-facing cameras (or multiple rear-facing cameras) of electronic device 100 may be called a set of image frames. Here, cam1 may be called the first camera and cam2 the second camera; a frame captured by cam1, such as T1 shown in FIG. 5, may be called the first image, and a frame captured by cam2, such as S1 shown in FIG. 5, may be called the second image. The image obtained after filling part of the content of the first image with the second image may be called the third image, such as T1' shown in FIG. 7.
The selfie stick may be called the second device, and the selfie stick in the image may be called the first object.
The area in the second image used to fill the first image may be called the first area, such as area 60 shown in S1' in FIG. 7. The area in the first image corresponding to the masked image content may be called the second area, such as the area corresponding to selfie stick 10 in image frame T1 in FIG. 6A.
Referring to FIG. 3A, an operation acting on shooting control 313 may be called an operation acting on the shooting control detected by the electronic device. An operation acting on a physical button of electronic device 100, such as the power key or volume key, may be called an operation acting on a button detected by the electronic device. A user operation acting on the shooting button of the selfie stick may be called an image shooting operation detected by the second device. The above shooting operations may be called the first user operation.
Referring to FIG. 3B, an operation acting on control 314 may be called the second user operation. Displaying the user interface shown in FIG. 3C may be called displaying the third image. The third image shown in control 314 may be called displaying a thumbnail of the third image.
FIG. 8 shows a schematic structural diagram of electronic device 100.
Electronic device 100 may include processor 110, external memory interface 120, internal memory 121, universal serial bus (USB) interface 130, charging management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, subscriber identification module (SIM) card interface 195, and so on. Sensor module 180 may include pressure sensor 180A, gyroscope sensor 180B, barometric pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity light sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light sensor 180L, bone conduction sensor 180M, and so on.
It can be understood that the structure illustrated in this embodiment of the invention does not constitute a specific limitation on electronic device 100. In other embodiments of this application, electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and so on. Different processing units may be independent devices or may be integrated into one or more processors.
The controller can generate operation control signals according to the instruction opcode and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is a cache, which can hold instructions or data that processor 110 has just used or uses cyclically. If processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of processor 110, thereby improving system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and so on.
The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, processor 110 may contain multiple groups of I2C buses and may be coupled separately to touch sensor 180K, the charger, the flash, camera 193, and so on through different I2C bus interfaces. For example, processor 110 may be coupled to touch sensor 180K through an I2C interface, so that processor 110 and touch sensor 180K communicate through the I2C bus interface, realizing the touch function of electronic device 100.
The I2S interface can be used for audio communication. In some embodiments, processor 110 may contain multiple groups of I2S buses. Processor 110 may be coupled to audio module 170 through an I2S bus, realizing communication between processor 110 and audio module 170. In some embodiments, audio module 170 can pass audio signals to wireless communication module 160 through the I2S interface, realizing the function of answering calls through a Bluetooth headset.
The PCM interface can also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, audio module 170 and wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, audio module 170 can also pass audio signals to wireless communication module 160 through the PCM interface, realizing the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is typically used to connect processor 110 and wireless communication module 160. For example, processor 110 communicates with the Bluetooth module in wireless communication module 160 through the UART interface, realizing the Bluetooth function. In some embodiments, audio module 170 can pass audio signals to wireless communication module 160 through the UART interface, realizing the function of playing music through a Bluetooth headset.
The MIPI interface can be used to connect processor 110 with peripheral devices such as display screen 194 and camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and so on. In some embodiments, processor 110 and camera 193 communicate through the CSI interface, realizing the shooting function of electronic device 100; processor 110 and display screen 194 communicate through the DSI interface, realizing the display function of electronic device 100.
The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect processor 110 with camera 193, display screen 194, wireless communication module 160, audio module 170, sensor module 180, and so on. The GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and so on.
USB interface 130 is an interface conforming to the USB standard specification and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, and so on. USB interface 130 can be used to connect a charger to charge electronic device 100, and can also be used to transfer data between electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. The interface can also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the invention are only schematic illustrations and do not constitute a structural limitation on electronic device 100. In other embodiments of this application, electronic device 100 may also adopt interface connection modes different from those in the above embodiments, or a combination of multiple interface connection modes.
Charging management module 140 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, charging management module 140 can receive the charging input of a wired charger through USB interface 130. In some wireless charging embodiments, charging management module 140 can receive wireless charging input through the wireless charging coil of electronic device 100. While charging battery 142, charging management module 140 can also supply power to the electronic device through power management module 141.
Power management module 141 is used to connect battery 142 and charging management module 140 to processor 110. Power management module 141 receives input from battery 142 and/or charging management module 140 and supplies power to processor 110, internal memory 121, display screen 194, camera 193, wireless communication module 160, and so on. Power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health state (leakage, impedance). In some other embodiments, power management module 141 may also be provided in processor 110. In still other embodiments, power management module 141 and charging management module 140 may also be provided in the same device.
The wireless communication function of electronic device 100 can be realized through antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, the modem processor, the baseband processor, and so on.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization; for example, antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antennas can be used in combination with tuning switches.
Mobile communication module 150 can provide solutions for wireless communication, including 2G/3G/4G/5G, applied on electronic device 100. Mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. Mobile communication module 150 can receive electromagnetic waves through antenna 1, filter and amplify the received electromagnetic waves, and pass them to the modem processor for demodulation. Mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated out through antenna 1. In some embodiments, at least some functional modules of mobile communication module 150 may be provided in processor 110. In some embodiments, at least some functional modules of mobile communication module 150 may be provided in the same device as at least some modules of processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium/high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.) or displays images or videos through display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of processor 110 and provided in the same device as mobile communication module 150 or other functional modules.
Wireless communication module 160 can provide solutions for wireless communication applied on electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on. Wireless communication module 160 may be one or more devices integrating at least one communication processing module. Wireless communication module 160 receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to processor 110. Wireless communication module 160 can also receive signals to be sent from processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated out through antenna 2.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160, so that electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and so on. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
Electronic device 100 realizes the display function through the GPU, display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
Display screen 194 is used to display images, videos, and the like. Display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), and so on. In some embodiments, electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
In the embodiments of this application, electronic device 100 displays the user interfaces (FIG. 3A-FIG. 3C), the preview images captured by the cameras, and the photos or videos taken by the user through the GPU, display screen 194, the application processor, and so on.
Electronic device 100 can realize the shooting function through the ISP, camera 193, the video codec, the GPU, display screen 194, the application processor, and so on.
The ISP is used to process the data fed back by camera 193. For example, when taking a photo, the shutter opens, light is passed through the lens to the camera photosensitive element, the optical signal is converted into an electrical signal, and the camera photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also algorithmically optimize the noise, brightness, and skin tone of the image. The ISP can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in camera 193.
Camera 193 is used to capture still images or video. An object generates an optical image through the lens that is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then passes the electrical signal to the ISP for conversion into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard image signal in RGB, YUV, or another format. In some embodiments, electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform and the like on the frequency point energy.
The video codec is used to compress or decompress digital video. Electronic device 100 can support one or more video codecs. In this way, electronic device 100 can play or record videos in multiple coding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
In the embodiments of this application, electronic device 100 includes two or more cameras 193.
In the implementation in which electronic device 100 displays the original image in the preview window, the images captured by camera 193 and processed by the ISP can be sent directly to the GPU and display screen 194 for display. Electronic device 100 then masks the original image according to the user's specific shooting content.
In the implementation in which electronic device 100 displays the masked image in the preview window, electronic device 100 can process the images captured by camera 193 before sending them for display, so that the images displayed by the GPU, display screen 194, and so on do not include the masked object.
The NPU is a neural-network (NN) computing processor. Drawing on the structure of biological neural networks, for example the transmission patterns between neurons in the human brain, it rapidly processes input information and can also learn continuously. Through the NPU, applications such as intelligent cognition of electronic device 100 can be realized, for example: image recognition, face recognition, speech recognition, text understanding, and so on.
Internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
Random access memory may include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), and so on. Non-volatile memory may include magnetic disk storage devices and flash memory.
Flash memory can be divided by operating principle into NOR FLASH, NAND FLASH, 3D NAND FLASH, and so on; by storage cell voltage levels into single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), and so on; and by storage specification into universal flash storage (UFS), embedded multi media card (eMMC), and so on.
Random access memory can be read and written directly by processor 110. It can be used to store executable programs (such as machine instructions) of the operating system or other running programs, and can also be used to store data of users and applications, and so on.
Non-volatile memory can also store executable programs and data of users and applications, and so on, which can be loaded into random access memory in advance for processor 110 to read and write directly.
External memory interface 120 can be used to connect an external non-volatile memory, expanding the storage capacity of electronic device 100. The external non-volatile memory communicates with processor 110 through external memory interface 120, realizing the data storage function, for example saving music, video, and other files in the external non-volatile memory.
In the embodiments of this application, image data such as the photos and videos taken by the user can be stored in the non-volatile memory connected to external memory interface 120, for the user to use at any time.
Electronic device 100 can realize audio functions, such as music playback and recording, through audio module 170, speaker 170A, receiver 170B, microphone 170C, headset jack 170D, the application processor, and so on.
Audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert analog audio input into a digital audio signal. Audio module 170 can also be used to encode and decode audio signals. In some embodiments, audio module 170 may be provided in processor 110, or some functional modules of audio module 170 may be provided in processor 110.
Speaker 170A, also called the "loudspeaker", is used to convert audio electrical signals into sound signals. Electronic device 100 can play music or take hands-free calls through speaker 170A.
Receiver 170B, also called the "earpiece", is used to convert audio electrical signals into sound signals. When electronic device 100 answers a call or voice message, receiver 170B can be brought close to the ear to hear the voice.
Microphone 170C, also called the "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak close to microphone 170C to input the sound signal into microphone 170C. Electronic device 100 may be provided with at least one microphone 170C. In other embodiments, electronic device 100 may be provided with two microphones 170C, which in addition to collecting sound signals can also realize a noise reduction function. In still other embodiments, electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, realize directional recording functions, and so on.
Headset jack 170D is used to connect wired headsets. Headset jack 170D may be USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
Pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, pressure sensor 180A may be provided on display screen 194. There are many kinds of pressure sensors, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on pressure sensor 180A, the capacitance between the electrodes changes, and electronic device 100 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on display screen 194, electronic device 100 detects the intensity of the touch operation through pressure sensor 180A. Electronic device 100 can also calculate the touched position from the detection signal of pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example: when a touch operation with an intensity less than a first pressure threshold acts on the short message application icon, an instruction to view short messages is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
Gyroscope sensor 180B can be used to determine the motion posture of electronic device 100. In some embodiments, the angular velocity of electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined through gyroscope sensor 180B. Gyroscope sensor 180B can be used for image stabilization during shooting. Exemplarily, when the shutter is pressed, gyroscope sensor 180B detects the angle at which electronic device 100 shakes, calculates from the angle the distance the lens module needs to compensate, and lets the lens counteract the shake of electronic device 100 through reverse movement, realizing stabilization. Gyroscope sensor 180B can also be used for navigation and somatosensory gaming scenarios.
Barometric pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from the air pressure value measured by barometric pressure sensor 180C, assisting positioning and navigation.
Magnetic sensor 180D includes a Hall sensor. Electronic device 100 can use magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when electronic device 100 is a flip phone, electronic device 100 can detect the opening and closing of the flip cover according to magnetic sensor 180D, and then, according to the detected open or closed state of the leather case or of the flip cover, set features such as automatic unlocking of the flip cover.
Acceleration sensor 180E can detect the magnitude of the acceleration of electronic device 100 in various directions (generally three axes). When electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device and is applied to landscape/portrait switching, pedometers, and other applications.
Distance sensor 180F is used to measure distance. Electronic device 100 can measure distance by infrared or laser. In some embodiments, in a shooting scene, electronic device 100 can use distance sensor 180F to measure distance, realizing fast focusing.
Proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. Electronic device 100 emits infrared light outward through the light emitting diode. Electronic device 100 uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near electronic device 100; when insufficient reflected light is detected, electronic device 100 can determine that there is no object near electronic device 100. Electronic device 100 can use proximity light sensor 180G to detect the user holding electronic device 100 close to the ear during a call, so as to automatically turn off the screen to save power. Proximity light sensor 180G can also be used in leather case mode and in pocket mode for automatic unlocking and screen locking.
Ambient light sensor 180L is used to sense ambient light brightness. Electronic device 100 can adaptively adjust the brightness of display screen 194 according to the perceived ambient light brightness. Ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos. Ambient light sensor 180L can also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket, preventing accidental touches.
Fingerprint sensor 180H is used to collect fingerprints. Electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access to application locks, fingerprint photographing, fingerprint answering of incoming calls, and so on.
Temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 uses the temperature detected by temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by temperature sensor 180J exceeds a threshold, electronic device 100 reduces the performance of a processor located near temperature sensor 180J, reducing power consumption and implementing thermal protection. In other embodiments, when the temperature is lower than another threshold, electronic device 100 heats battery 142 to prevent an abnormal shutdown of electronic device 100 caused by the low temperature. In still other embodiments, when the temperature is lower than yet another threshold, electronic device 100 boosts the output voltage of battery 142 to avoid an abnormal shutdown caused by the low temperature.
Touch sensor 180K is also called a "touch device". Touch sensor 180K may be provided on display screen 194; touch sensor 180K and display screen 194 form a touch screen, also called a "touch-control screen". Touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through display screen 194. In other embodiments, touch sensor 180K may also be provided on the surface of electronic device 100, at a position different from that of display screen 194.
In the embodiments of this application, electronic device 100 detects, through touch sensor 180K, user operations acting on display screen 194 (tap, double-tap, long-press, etc.).
Bone conduction sensor 180M can acquire vibration signals. In some embodiments, bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human vocal part. Bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, bone conduction sensor 180M can also be provided in an earphone, combined into a bone conduction earphone. Audio module 170 can parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by bone conduction sensor 180M, realizing a voice function. The application processor can parse heart rate information based on the blood pressure beating signal acquired by bone conduction sensor 180M, realizing a heart rate detection function.
Button 190 includes a power button, volume buttons, and so on. Button 190 may be a mechanical button or a touch button. Electronic device 100 can receive button input and generate button signal input related to the user settings and function control of electronic device 100.
Motor 191 can generate vibration alerts. Motor 191 can be used for incoming call vibration alerts and for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playback) may correspond to different vibration feedback effects. Touch operations acting on different areas of display screen 194 may also correspond to different vibration feedback effects of motor 191. Different application scenarios (e.g., time reminders, receiving messages, alarm clocks, games) may also correspond to different vibration feedback effects. The touch vibration feedback effect can also support customization.
Indicator 192 may be an indicator light and can be used to indicate the charging status and power changes, and also to indicate messages, missed calls, notifications, and so on.
SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into contact with or separated from electronic device 100 by being inserted into or pulled out of SIM card interface 195. Electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards can be the same or different. SIM card interface 195 is also compatible with different types of SIM cards. SIM card interface 195 is also compatible with external memory cards. Electronic device 100 interacts with the network through the SIM card, realizing functions such as calls and data communication. In some embodiments, electronic device 100 adopts an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in electronic device 100 and cannot be separated from electronic device 100.
By implementing the multi-camera-based image content masking method provided in the embodiments of this application, electronic devices such as mobile phones and tablet computers can capture images with parallax through two or more cameras. The electronic device can then, based on the parallax, cut and replace the images captured by different cameras, thereby masking content that the user does not want to appear in the image and improving the user's shooting experience.
In particular, a set of image frames captured by two or more cameras naturally has parallax, so the user can mask image content using the parallax without adjusting the position of the electronic device or their own position.
The term "user interface (UI)" in the specification, claims, and drawings of this application is a medium interface for interaction and information exchange between an application program or operating system and the user; it realizes the conversion between the internal form of information and a form the user can accept. The user interface of an application program is source code written in specific computer languages such as Java and the extensible markup language (XML); the interface source code is parsed and rendered on the terminal device and is finally presented as content the user can recognize, such as pictures, text, buttons, and other controls. A control, also known as a widget, is a basic element of the user interface; typical controls include the toolbar, menu bar, text box, button, scrollbar, pictures, and text. The properties and contents of the controls in an interface are defined through tags or nodes; for example, XML specifies the controls contained in an interface through nodes such as <Textview>, <ImgView>, and <VideoView>. A node corresponds to a control or property in the interface, and after being parsed and rendered, a node is presented as content visible to the user. In addition, the interfaces of many applications, such as hybrid applications, usually also contain web pages. A web page, also called a page, can be understood as a special control embedded in the application program interface. A web page is source code written in specific computer languages, such as the hypertext markup language (HTML), cascading style sheets (CSS), and JavaScript (JS); web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with functionality similar to a browser. The specific content contained in a web page is likewise defined through tags or nodes in the web page source code; for example, HTML defines the elements and attributes of a web page through <p>, <img>, <video>, and <canvas>.
The commonly used form of a user interface is the graphical user interface (GUI), which refers to a user interface displayed graphically and related to computer operations. It may be an interface element such as an icon, window, or control displayed on the display screen of the electronic device, where controls may include visible interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
As used in the specification and appended claims of this application, the singular forms "a", "an", "the", "the above", "said", and "this" are intended to also include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used in this application refers to and includes any or all possible combinations of one or more of the listed items. As used in the above embodiments, depending on the context, the term "when" may be interpreted to mean "if" or "after" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "when determining" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)".
In the above embodiments, implementation may be in whole or in part through software, hardware, firmware, or any combination thereof. When implemented with software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), semiconductor media (e.g., solid state drive), and so on.
A person of ordinary skill in the art can understand that all or part of the processes for implementing the methods of the above embodiments can be completed by a computer program instructing the related hardware. The program can be stored in a computer-readable storage medium, and when the program is executed, it can include the processes of the above method embodiments. The aforementioned storage media include: ROM, random access memory (RAM), magnetic disks, optical discs, and various other media that can store program code.

Claims (18)

  1. An image content masking method, characterized in that the method is applied to an electronic device having cameras, and the method comprises:
    acquiring a set of image frames, the set of image frames comprising at least a first image and a second image, wherein the first image and the second image are captured at the same moment by a first camera and a second camera of the electronic device respectively, and the first camera and the second camera are both front-facing cameras or both rear-facing cameras of the electronic device;
    replacing a first object in the first image with image content from the second image to obtain a third image; and
    displaying the third image.
  2. The method according to claim 1, characterized in that replacing the first object in the first image with image content from the second image to obtain the third image specifically comprises:
    replacing the image content of a second area in the first image with the image content of a first area in the second image to obtain the third image, wherein the second area is the area in the first image in which the first object is displayed, and the first area corresponds to the second area.
  3. The method according to claim 2, characterized in that the first area corresponding to the second area comprises:
    the first area and the second area being identical in size and position;
    or, the first area being larger than the second area, the center of the first area coinciding with the center of the second area, and the first area covering the second area.
  4. The method according to any one of claims 1-3, characterized in that before acquiring the set of image frames, the method further comprises: detecting a first user operation.
  5. The method according to claim 4, characterized in that the first user operation is: an image shooting operation.
  6. The method according to claim 5, characterized in that the image shooting operation comprises:
    an image shooting operation detected by the electronic device, and/or an image shooting operation detected by a second device;
    wherein the image shooting operation detected by the electronic device comprises: an operation acting on a shooting control, and an operation acting on a button, the shooting control being displayed on a user interface provided by the electronic device, and the second device being connected to the electronic device.
  7. The method according to claim 6, characterized in that the image of the second device in the first image is the first object.
  8. The method according to any one of claims 4-7, characterized in that displaying the third image specifically comprises:
    saving the third image in response to the first user operation; and
    upon detecting a second user operation, displaying the third image or a thumbnail of the third image.
  9. The method according to claim 8, characterized in that the second user operation comprises: an operation of displaying the third image in a gallery application, or an operation of a third-party application invoking the third image.
  10. The method according to claim 4, characterized in that the first user operation is: an operation of opening a shooting preview interface.
  11. The method according to claim 10, characterized in that the operation of opening the shooting preview interface comprises:
    an operation of entering the shooting preview interface from a first application icon, the first application being an application provided by the electronic device;
    or, an operation of entering the shooting preview interface from a first control provided by a third-party application.
  12. The method according to claim 10 or 11, characterized in that after the first user operation is detected, the method further comprises: displaying the shooting preview interface, the shooting preview interface comprising a preview window for displaying in real time the images captured by the camera of the electronic device;
    and displaying the third image specifically comprises: displaying the third image in the preview window.
  13. The method according to any one of claims 1-12, characterized in that the second device is a selfie stick and the first object is the selfie stick in the image.
  14. The method according to claim 13, characterized in that the selfie stick comprises a stick body and a clamping part, and the first object being the selfie stick in the image specifically comprises: the first object being the stick body of the selfie stick in the image.
  15. An electronic device, characterized by comprising one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code comprising computer instructions which, when executed by the one or more processors, cause the method according to any one of claims 1-14 to be performed.
  16. A chip system, applied to an electronic device, the chip system comprising one or more processors configured to invoke computer instructions to cause the method according to any one of claims 1-14 to be performed.
  17. A computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1-14.
  18. A computer-readable storage medium comprising instructions, characterized in that when the instructions are run on an electronic device, the method according to any one of claims 1-14 is caused to be performed.
PCT/CN2022/089274 2021-07-27 2022-04-26 Multi-camera-based image content masking method and apparatus WO2023005298A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110849366.6A CN113747058B (zh) 2021-07-27 2021-07-27 Multi-camera-based image content masking method and apparatus
CN202110849366.6 2021-07-27

Publications (1)

Publication Number Publication Date
WO2023005298A1 true WO2023005298A1 (zh) 2023-02-02

Family

ID=78729260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/089274 WO2023005298A1 (zh) 2021-07-27 2022-04-26 Multi-camera-based image content masking method and apparatus

Country Status (2)

Country Link
CN (1) CN113747058B (zh)
WO (1) WO2023005298A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113747058B (zh) 2021-07-27 2023-06-23 Honor Device Co., Ltd. Multi-camera-based image content masking method and apparatus
CN117119285A (zh) * 2023-02-27 2023-11-24 Honor Device Co., Ltd. A photographing method
CN117119284A (zh) * 2023-02-27 2023-11-24 Honor Device Co., Ltd. A photographing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580882A (zh) * 2014-11-03 2015-04-29 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Photographing method and device
JP2016220051A (ja) 2015-05-21 2016-12-22 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and program
CN106791393A (zh) * 2016-12-20 2017-05-31 Vivo Mobile Communication Co., Ltd. Photographing method and mobile terminal
CN107493429A (zh) * 2017-08-09 2017-12-19 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Selfie-stick masking method and apparatus for selfie photos
CN113747058A (zh) * 2021-07-27 2021-12-03 Honor Device Co., Ltd. Multi-camera-based image content masking method and apparatus

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401336B2 (en) * 2001-05-04 2013-03-19 Legend3D, Inc. System and method for rapid image sequence depth enhancement with augmented computer-generated elements
JP2010041586A (ja) 2008-08-07 2010-02-18 Olympus Corp Imaging apparatus
US8964025B2 (en) * 2011-04-12 2015-02-24 International Business Machines Corporation Visual obstruction removal with image capture
JP6104066B2 (ja) * 2013-06-18 2017-03-29 Canon Inc. Image processing apparatus and image processing method
US10419666B1 (en) * 2015-12-29 2019-09-17 Amazon Technologies, Inc. Multiple camera panoramic images
GB2560306B (en) * 2017-03-01 2020-07-08 Sony Interactive Entertainment Inc Image processing
US10586308B2 (en) * 2017-05-09 2020-03-10 Adobe Inc. Digital media environment for removal of obstructions in a digital image scene
CN112149458A (zh) * 2019-06-27 2020-12-29 SenseTime Group Ltd. Obstacle detection method, intelligent driving control method, apparatus, medium, and device
CN111797836B (zh) * 2020-06-18 2024-04-26 China Academy of Space Technology Deep-learning-based obstacle segmentation method for extraterrestrial body rovers
CN112584040B (zh) * 2020-12-02 2022-05-17 Vivo Mobile Communication Co., Ltd. Image display method and apparatus, and electronic device


Also Published As

Publication number Publication date
CN113747058B (zh) 2023-06-23
CN113747058A (zh) 2021-12-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22847916

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE