WO2023109299A1 - Image processing method and apparatus, and device and storage medium


Info

Publication number: WO2023109299A1
Authority: WIPO (PCT)
Prior art keywords: scene, target, image, target image, processed
Application number: PCT/CN2022/125963
Other languages: French (fr), Chinese (zh)
Inventor: 郑亮 (Zheng Liang)
Original assignee: ZTE Corporation (中兴通讯股份有限公司)
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2023109299A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams

Definitions

  • Embodiments of the present disclosure relate to the field of computer technology, and in particular to an image processing method, apparatus, device, and storage medium.
  • The images or videos captured by a terminal's camera contain personal private data (such as the shooting time, the shooting location, and biometric information such as faces and irises), and this private data may be obtained by third-party software or by others, resulting in leakage of the end user's personal privacy.
  • Embodiments of the present disclosure provide an image processing method, device, equipment, and storage medium.
  • An embodiment of the present disclosure provides an image processing method, including: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
  • An embodiment of the present disclosure further provides an image processing apparatus, including: an image acquisition module, configured to acquire a first target image; a scene determination module, configured to determine a first scene corresponding to the first target image; an object determination module, configured to determine, from the first target image, a target object to be processed corresponding to the first scene; and an object processing module, configured to process the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
  • An embodiment of the present disclosure further provides an electronic device, including a processor and a memory, where the processor is configured to execute an image processing program stored in the memory so as to implement the above image processing method.
  • Embodiments of the present disclosure further provide a storage medium storing one or more programs, which can be executed by one or more processors so as to implement the above image processing method.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • 10. Image acquisition module; 20. Scene determination module; 30. Object determination module; 40. Object processing module.
  • In the related art, images and/or videos are processed using third-party software so as to erase the personal privacy content they contain.
  • Processing privacy in this way is not only inconvenient but may also overlook some information; if the original picture is obtained by others, personal privacy information will still be leaked, so this approach cannot effectively protect personal privacy.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
  • An image processing method provided by an embodiment of the present disclosure includes step S10 to step S50.
  • the first target image is collected by the user through the camera of the terminal, where the terminal includes but is not limited to a mobile phone, a tablet computer, and the like.
  • the user turns on the camera of the terminal, and takes pictures with the camera to acquire the first target image.
  • S20 Determine a first scene corresponding to the first target image.
  • the first scene is a preset privacy protection scene, and each privacy protection scene is correspondingly preset with a protection object tag.
  • the privacy protection scene may be a selfie scene, a meeting scene, a home scene, etc., wherein the privacy protection scene may be set according to actual needs, and no specific limitation is set in this embodiment.
  • the protected object tags include at least one of the following: human body tags, face tags, text tags, pet tags, furniture tags, car tags, sound tags, and location tags. Of course, the protected object tags can also be set according to actual needs.
  • the protection object label is a human body label
  • the object to be protected is the overall human-shaped area, including limbs, head, clothing, and accessories, as well as biometric information such as faces and irises shown in the image
  • the protection object label is a text label
  • the object to be protected is the text information in the image, such as the content of a presentation, writing on a blackboard, notes, and other text information
  • the protection object label is a pet label
  • the objects to be protected are animals that appear in the image, such as cats, dogs and other animals
  • the protection object label is a furniture label
  • the objects to be protected are furniture such as sofas and decorative paintings that appear in the image
  • the protection object label is a car label
  • the objects to be protected are cars, buses, and large trucks that appear in the image
  • the protection object label is a sound label; the object to be protected is the audio data contained in the captured video
  • The protection object tags corresponding to the selfie scene may be face tags, jewelry tags, etc.; when the privacy protection scene is a meeting scene, the protection object tags corresponding to the meeting scene may be human body tags, text tags, sound tags, etc.;
  • the protection object tags corresponding to the home scene can be human body tags, jewelry tags, furniture tags, pet tags, and location tags.
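The scene-to-tag presets described above can be sketched as a simple lookup table. This is a minimal illustrative sketch, not an API defined by the disclosure; the scene and tag names are assumptions drawn from the examples in the text.

```python
# Preset privacy-protection scenes mapped to their protected-object tags,
# following the examples given above (selfie, meeting, home).
PRIVACY_SCENES: dict[str, set[str]] = {
    "selfie": {"face", "jewelry"},
    "meeting": {"human_body", "text", "sound"},
    "home": {"human_body", "jewelry", "furniture", "pet", "location"},
}

def protected_tags(scene: str) -> set[str]:
    """Return the preset protected-object tags for a privacy scene."""
    return PRIVACY_SCENES.get(scene, set())
```

An unknown scene simply yields no protected tags, which matches the idea that only preset privacy-protection scenes trigger processing.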
  • the preset protection object label for each privacy protection scene can be set according to the actual situation.
  • determining the first scene corresponding to the first target image in step S20 includes: receiving input scene information representing the first scene; and determining the first scene according to the scene information.
  • The user can input the scene information through the terminal, for example by touch control, voice control, etc. When the scene information is received, it can be processed to obtain the first scene corresponding to it.
  • the user may input scene information after the image is captured or when the image is captured.
  • The first scene can also be determined in the following manner: acquire a second scene; close the permission object corresponding to the second scene; and, based on the second scene, determine the first scene corresponding to the acquired first target image.
  • the second scene may be acquired according to the received scene information representing the second scene.
  • A permission object can be a sound object or a location object.
  • If the second scene does not include a permission object, there is no need to close a permission object corresponding to the second scene.
  • The second scene in this embodiment is consistent with the first scene. For example, when the second scene is a home scene, the user inputs scene information when taking an image; the second scene corresponding to the scene information is obtained, and the location object corresponding to the second scene is closed. After the corresponding permission object is closed, the first target image is acquired and processed according to the second scene.
  • The first scene in step S20 can also be determined in the following manner: acquire target image feature data of the first target image; and determine, based on a correspondence between image feature data and a scene model, the first scene corresponding to the target image feature data.
  • The scene model can be established through machine learning on image feature data in images; for example, image feature data can be learned by a neural network to build the corresponding scene model.
  • The scene model includes a plurality of different privacy protection scenes; that is, when the target image feature data of the first target image is successfully matched against the image feature data of the scene model, the first scene can be determined according to the target image feature data.
  • The first scene in step S20 can also be determined in the following manner: receive input scene information representing a custom scene; determine the custom scene according to the scene information; based on a scene configuration interface, receive input tag information representing the protected object tags corresponding to the custom scene; determine the protected object tags according to the tag information; configure the custom scene based on the protected object tags; and determine, based on the configured custom scene, the first scene corresponding to the first target image.
  • the scene information corresponding to the custom scene can be input when the image is captured, or after the image capture is completed, that is, the label information of the protected object label corresponding to the custom scene can be input according to actual needs.
  • A scene configuration interface will pop up. Based on this interface, the tag information representing the protected object tags input by the user is received, and the custom scene is configured with the protected object tags corresponding to that tag information. After the custom scene is configured, it is stored and used as a preset privacy protection scene, so as to facilitate the user processing target images in the future.
  • S30 Determine a target object to be processed corresponding to the first scene from the first target image.
  • step S30 includes: based on the first scene, determining a protected object label corresponding to the first scene; and determining a target object to be processed corresponding to the protected object label from the first target image.
  • the target object to be processed is the protected object corresponding to the protected object label in the first scene, and the target object to be processed is visible.
  • the target object to be processed may be a protection object corresponding to a human body tag, an accessory tag, a furniture tag, and a pet tag.
  • the semantic segmentation technology of image processing may be used to identify the first target image, so as to obtain the target object to be processed corresponding to the first scene in the first target image.
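Given a per-pixel label map from some semantic-segmentation model (the model itself is not specified in the disclosure), selecting the target objects to be processed reduces to picking the pixels whose class labels match the scene's protected-object tags. A sketch under that assumption, with illustrative class IDs and tag names:

```python
import numpy as np

def masks_for_tags(label_map: np.ndarray, protected: set[str],
                   id_to_tag: dict[int, str]) -> dict[str, np.ndarray]:
    """Return a boolean mask per protected tag found in the label map.

    label_map  -- H x W array of class IDs from a segmentation model
    protected  -- protected-object tags of the current privacy scene
    id_to_tag  -- mapping from segmentation class ID to tag name
    """
    masks = {}
    for class_id, tag in id_to_tag.items():
        if tag in protected:
            masks[tag] = (label_map == class_id)  # True where object appears
    return masks
```

Each returned mask plays the role of Mask_i in the blurring formulas later in the document.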
  • the preset threshold can be set according to actual needs, and is not specifically limited in this embodiment.
  • S40 Process the target object in the first target image, so that there is a difference between the processed first target image and the unprocessed first target image.
  • the difference between the processed first target image and the unprocessed first target image means that the processed first target image has covered the target object.
  • processing the target object in the first target image in step S40 includes: performing masking processing on the target object in the first target image.
  • performing masking processing on the target object in the first target image includes: performing blurring processing on the target object; or adding preset elements on the target object to cover the target object.
  • the preset elements include at least one of the following elements: texture, picture, whiteboard, and mosaic.
  • the picture may be a preset picture such as a cartoon picture, and the preset elements may be set according to actual needs.
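Of the preset masking elements listed above, a mosaic is easy to illustrate: the region under the mask is replaced block by block with the block's average value. A grayscale sketch with illustrative names (the disclosure does not fix a block size or algorithm):

```python
import numpy as np

def mosaic(img: np.ndarray, mask: np.ndarray, b: int = 8) -> np.ndarray:
    """Pixelate the masked region of a grayscale image in b x b blocks."""
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, b):
        for x in range(0, w, b):
            block = (slice(y, min(y + b, h)), slice(x, min(x + b, w)))
            if mask[block].any():           # block touches the target object
                out[block] = img[block].mean()
    return out
```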
  • A method for blurring a target object to obtain the processed first target image is introduced below.
  • For the i-th target object to be processed, a mask Mask_i(x, y) is defined, where i is the sequence number of the target object to be processed, and x and y are the row and column coordinates of a pixel in the i-th target object.
  • The fuzzy (blur) convolution kernel is a k×k mean kernel, with k = 2r + 1:
  • F(i, j) = 1 / (k · k), for 0 ≤ i, j < k
  • where F(i, j) is a fuzzy convolution kernel of size k×k.
  • After the background image is blurred with this kernel, the background image is obtained as:
  • bgimg(x, y) = Σ_{i=-r}^{r} Σ_{j=-r}^{r} F(i + r, j + r) · srcimg(x + i, y + j)
  • where bgimg(x, y) is the background data at pixel (x, y), and srcimg(x + i, y + j) is the data within a neighborhood of radius r around pixel (x, y) in the unprocessed first target image.
  • The processed first target image can then be obtained as:
  • dstimg(x, y) = (1 − Mask_i(x, y)) · srcimg(x, y) + Mask_i(x, y) · bgimg(x, y)
  • where dstimg(x, y) is the image value at pixel (x, y) in the blurred first target image.
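The blur-and-blend formulas above translate directly to NumPy: a k×k mean kernel produces bgimg, and the mask blends the blur back over the target object only. A grayscale sketch with illustrative names; edge padding is an assumption, since the disclosure does not specify border handling:

```python
import numpy as np

def blur_target(srcimg: np.ndarray, mask: np.ndarray, k: int = 5) -> np.ndarray:
    """Blur only the masked target object: dstimg = (1-M)*src + M*bgimg."""
    r = k // 2
    h, w = srcimg.shape
    padded = np.pad(srcimg, r, mode="edge")
    bgimg = np.zeros_like(srcimg, dtype=float)
    for i in range(k):          # convolve with F(i, j) = 1 / (k * k)
        for j in range(k):
            bgimg += padded[i:i + h, j:j + w]
    bgimg /= k * k
    m = mask.astype(float)
    return (1 - m) * srcimg + m * bgimg
```

Pixels outside the mask are returned unchanged; pixels inside it take the blurred background value, exactly as in the blending equation.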
  • S50 Receive an input information clearing request, and clear the processed attribute information of the first target image based on the information clearing request.
  • the attribute information may include randomly generated time information, randomly generated location information, terminal information, exposure parameter information, and the like. Wherein, clearing the attribute information of the first target image after processing may completely remove the attribute information of the processed first image, or may replace the attribute information of the processed first image with a random number.
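The two attribute-clearing options described above (remove the fields outright, or replace them with random numbers) can be sketched as follows. The metadata is modelled as a plain dict with hypothetical field names, not a real EXIF API:

```python
import random

# Illustrative attribute fields that may carry private data.
SENSITIVE = {"capture_time", "gps_location", "device_model"}

def clear_attributes(meta: dict, randomize: bool = False) -> dict:
    """Return a copy of the metadata with sensitive fields removed,
    or replaced with random numbers when randomize is True."""
    cleaned = dict(meta)
    for key in SENSITIVE & cleaned.keys():
        if randomize:
            cleaned[key] = random.randint(0, 2**31 - 1)
        else:
            del cleaned[key]
    return cleaned
```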
  • After step S50, the camera of the terminal is exited.
  • The image processing method provided in this embodiment processes the target object and the permission object corresponding to the scene in the image according to the scene corresponding to the image, so as to protect the content related to the user's personal privacy in the image, thereby preventing the user's personal privacy from leaking through the original image.
  • FIG. 2 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure.
  • An image processing method provided by an embodiment of the present disclosure includes step S11 to step S61.
  • the video stream data is collected by the user through the camera of the terminal, where the terminal includes but is not limited to a mobile phone, a tablet computer, and the like.
  • the user turns on the camera of the terminal, and uses the camera to take pictures to obtain video stream data.
  • S21 Perform frame division processing on the video stream data to obtain a plurality of first target images before processing.
  • the video stream data is divided into frames according to the time sequence relationship to obtain a plurality of first target images before processing according to the time sequence relationship.
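The frame-division step above, ordering frames by their time-sequence relationship, can be sketched with the stream modelled as (timestamp, frame) pairs. A real implementation would decode an encoded stream with a codec library; this sketch only shows the ordering logic:

```python
def split_frames(stream: list[tuple[float, object]]) -> list[object]:
    """Divide video stream data into frames ordered by timestamp,
    yielding the unprocessed first target images in time order."""
    return [frame for _, frame in sorted(stream, key=lambda pair: pair[0])]
```

The same timestamps can later be used to reassemble the processed frames into video stream data (step S61).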
  • S31 Determine a first scene corresponding to a plurality of first target images before processing.
  • the first scene is a preset privacy protection scene, and the privacy protection scene is consistent with the above description, so this embodiment will not repeat it here.
  • determining the first scene corresponding to the plurality of first target images in step S31 includes: receiving input scene information representing the first scene; and determining the first scene according to the scene information.
  • the user can input the scene information through the terminal, for example, the user can input the scene information through touch control, voice control, etc., and when the scene information is received, the scene information can be processed to obtain the first scene corresponding to the scene information .
  • the user may input scene information after shooting the video or when shooting the video.
  • The first scene can also be determined in the following manner: obtain a second scene; close the permission object corresponding to the second scene; and, based on the second scene, determine the first scene corresponding to the obtained first target images.
  • the second scene may be acquired according to the received scene information representing the second scene.
  • The permission object can be a sound object or a location object.
  • If the second scene does not include a permission object, there is no need to close a permission object corresponding to the second scene.
  • The second scene in this embodiment is consistent with the first scene. For example, when the second scene is a home scene, the terminal receives input scene information when the user shoots a video; the second scene corresponding to the scene information is obtained, and the location object corresponding to the second scene is closed. After the corresponding permission object is closed, the video stream data is obtained and divided into frames to obtain multiple unprocessed first target images, which are then processed according to the second scene.
  • The first scene can also be determined in the following manner: obtain a third scene; process the preset audio corresponding to the third scene in the obtained video stream data; perform frame division on the video stream data after the preset audio has been processed, so as to obtain a plurality of unprocessed first target images; and, based on the third scene, determine a first scene corresponding to the plurality of unprocessed first target images.
  • the third scene can be acquired according to the received scene information representing the third scene.
  • the preset audio may refer to an audio object included in the video stream data.
  • Processing the preset audio in the video may refer to separating the image data and the audio data in the video stream data so as to remove the audio data, thereby muting the video.
  • The protection of personal privacy in the video is further strengthened by processing the permission object and the preset audio corresponding to the scene in the video.
  • The first scene in step S31 can also be determined in the following manner: acquire target image feature data of the first target images; and determine, based on a correspondence between image feature data and a scene model, the first scene corresponding to the target image feature data.
  • The first scene can also be determined in the following manner: receive input scene information representing a custom scene; determine the custom scene according to the scene information; based on a scene configuration interface, receive input tag information representing the protected object tags corresponding to the custom scene; determine the protected object tags according to the tag information; configure the custom scene based on the protected object tags; and determine, based on the configured custom scene, the first scene corresponding to the first target images.
  • the scene information corresponding to the custom scene can be input when shooting, and can also be input when the video stream data is obtained, that is, the label information of the protected object label corresponding to the custom scene can be input according to actual needs .
  • A scene configuration interface will pop up. Based on this interface, the tag information representing the protected object tags input by the user is received, and the custom scene is configured with the protected object tags corresponding to that tag information. After the custom scene is configured, it is stored and used as a preset privacy protection scene, so as to facilitate the user processing target images in the future.
  • S41 Determine a target object to be processed corresponding to the first scene from each pre-processed first target image.
  • step S41 includes: based on the first scene, determining the protected object label corresponding to the first scene; and determining the target object to be processed corresponding to the protected object label from each pre-processed first target image.
  • The semantic segmentation technology of image processing may be used to identify each unprocessed first target image, so as to obtain, in each unprocessed first target image, the target object to be processed corresponding to the protected object tags of the first scene.
  • the pending target object is visible.
  • S51 Process the target objects of each unprocessed first target image sequentially according to the time sequence relationship of the multiple unprocessed first target images obtained through the frame division processing, to obtain multiple processed first target images.
  • processing the target object in the first target image before processing in step S51 includes: performing masking processing on the target object in the first target image.
  • performing masking processing on the target object in the first target image in step S51 includes: performing blurring processing on the target object; or adding preset elements on the target object to cover the target object.
  • the preset elements include at least one of the following elements: texture, picture, whiteboard, and mosaic.
  • the picture may be a preset picture such as a cartoon picture, and the preset elements may be set according to actual needs.
  • the attribute information may include randomly generated time information, randomly generated location information, terminal information, exposure parameter information, and the like. Wherein, clearing the attribute information of the first target image after processing may completely remove the attribute information of the processed first image, or may replace the attribute information of the processed first image with a random number.
  • S61 Generate processed video stream data from a plurality of processed first target images based on a time sequence relationship, so that there is a difference between the processed video stream data and the unprocessed video stream data.
  • the processed video stream data can be generated after the multiple processed first target images are combined into frames.
  • the difference between the processed video stream data and the pre-processed video stream data means that the processed video stream data has covered the target object.
  • After step S61, the camera of the terminal is exited.
  • The image processing method provided in this embodiment processes the target object, the permission object, and the preset audio corresponding to the scene in the video images according to the scene corresponding to the video, so as to protect the personal privacy content involved in the video, thereby preventing the user's personal privacy from leaking through the original video.
  • FIG. 3 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • An image processing apparatus provided by an embodiment of the present disclosure includes an image acquisition module 10, a scene determination module 20, an object determination module 30, and an object processing module 40, where the image acquisition module 10 is used to acquire a first target image, the scene determination module 20 is used to determine a first scene corresponding to the first target image, the object determination module 30 is used to determine, from the first target image, a target object to be processed corresponding to the first scene, and the object processing module 40 is used to process the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
  • the object determination module 30 is further configured to determine the protected object label corresponding to the first scene based on the first scene, and determine the target object to be processed corresponding to the protected object label from the first target image.
  • the protected object tags include at least one of the following: human body tags, face tags, text tags, pet tags, furniture tags, car tags, sound tags, and location tags.
  • the scene determining module 20 is further configured to receive input scene information representing the first scene; determine the first scene according to the scene information.
  • the scene determination module 20 is further configured to acquire target image feature data of the first target image, and determine the first scene corresponding to the target image feature data based on the correspondence between the image feature data and the scene model.
  • the object processing module 40 is further configured to perform masking processing on the target object in the first target image, so as to implement processing on the target object in the first target image.
  • the masking process includes at least one of the following: blurring the target object; or adding preset elements on the target object to cover the target object.
  • the preset elements include at least one of the following elements: texture, picture, whiteboard, and mosaic.
  • the picture may be a preset picture such as a cartoon picture, and the preset elements may be set according to actual needs.
  • the image acquisition module 10 is further configured to acquire video stream data, and process the video stream data into frames to obtain a plurality of first target images before processing.
  • the object processing module 40 is further configured to sequentially process the target objects of each pre-processed first target image according to the time sequence relationship of the multiple pre-processed first target images obtained through frame division processing, to obtain multiple A processed first target image; based on the timing relationship, a plurality of processed first target images are generated into processed video stream data, so that there is a difference between the processed video stream data and the pre-processed video stream data .
  • The scene determination module 20 is also used to obtain a second scene, close the permission object corresponding to the second scene, and, based on the second scene, determine the first scene corresponding to the acquired first target image.
  • The scene determination module 20 is also used to obtain a third scene, process the preset audio corresponding to the third scene in the obtained video stream data, divide the video stream data after the preset audio has been processed into frames to obtain a plurality of unprocessed first target images, and determine, based on the third scene, a first scene corresponding to the plurality of unprocessed first target images.
  • The image processing apparatus provided in this embodiment processes the objects corresponding to the scenes in images and videos according to those scenes, so as to protect the content related to the user's personal privacy, thereby preventing the leakage of the user's personal privacy through the original image or video.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device 400 shown in FIG. 4 includes: at least one processor 401 , a memory 402 , at least one network interface 404 and other user interfaces 403 .
  • Various components in the electronic device 400 are coupled together through the bus system 405 .
  • the bus system 405 is used to realize connection and communication between these components.
  • the bus system 405 also includes a power bus, a control bus and a status signal bus.
  • the various buses are labeled as bus system 405 in FIG. 4 .
  • The user interface 403 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch panel, or a touch screen), and the like.
  • the memory 402 in the embodiment of the present disclosure may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
  • The non-volatile memory can be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory.
  • the volatile memory can be Random Access Memory (RAM), which acts as external cache memory.
  • By way of example rather than limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
  • Memory 402 described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • The memory 402 stores the following elements, executable units or data structures, or subsets or extended sets thereof: an operating system 4021 and an application program 4022.
  • The operating system 4021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks.
  • The application program 4022 includes various application programs, such as a media player and a browser, for implementing various application services. Programs implementing the methods of the embodiments of the present disclosure may be included in the application program 4022.
  • The processor 401 is configured to execute the method steps provided by each method embodiment, for example including: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
  • The methods disclosed in the foregoing embodiments of the present disclosure may be applied to the processor 401 or implemented by the processor 401.
  • The processor 401 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 401 or by instructions in the form of software.
  • The above-mentioned processor 401 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • The steps of the methods disclosed in the embodiments of the present disclosure may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software units in a decoding processor.
  • The software unit may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • The storage medium is located in the memory 402; the processor 401 reads the information in the memory 402 and completes the steps of the above method in combination with its hardware.
  • The processing unit may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this disclosure, or a combination thereof.
  • The electronic device provided in this embodiment may be the electronic device shown in FIG. 4, which can perform all the steps of the image processing method shown in FIGS. 1-2 and thereby achieve the technical effect of that method; please refer to the relevant descriptions of FIGS. 1-2. For the sake of brevity, details are not repeated here.
  • The embodiments of the present disclosure also provide a storage medium (a computer-readable storage medium).
  • The storage medium stores one or more programs.
  • The storage medium may include a volatile memory, such as a random access memory; it may also include a non-volatile memory, such as a read-only memory, a flash memory, a hard disk, or a solid-state disk; it may also include a combination of the above types of memory.
  • The one or more programs in the storage medium can be executed by one or more processors, so as to implement the above image processing method executed on the shooting device side.
  • The processor is configured to execute the shooting program stored in the memory, so as to implement the following steps of the image processing method executed on the shooting device side: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
  • An image processing method provided by an embodiment of the present disclosure includes: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
  • The embodiment of the present disclosure processes the target object corresponding to the scene in the image, so as to protect the content related to the user's personal privacy in the image and prevent the user's personal privacy from being leaked through the original image.
  • The storage medium may be a random access memory (RAM), a read-only memory (ROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable disk, a CD-ROM, or any other known storage medium.

Abstract

The embodiments of the present disclosure relate to an image processing method and apparatus, and a device and a storage medium. The image processing method comprises: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed which corresponds to the first scene; and processing the target object in the first target image, so that there is a difference between the first target image after processing and the first target image before processing.

Description

Image processing method, apparatus, device, and storage medium
Cross-Reference to Related Applications
This disclosure claims priority to Chinese patent application CN202111521605.1, filed on December 13, 2021 and entitled "An Image Processing Method, Apparatus, Device, and Storage Medium", the entire content of which is incorporated into this disclosure by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to an image processing method, apparatus, device, and storage medium.
Background
Images or videos captured by a terminal's camera contain personal private data (for example, the shooting time, the shooting location, and biometric information such as faces and irises). Such private data may be obtained by third-party software or by others, resulting in leakage of the terminal user's personal privacy.
Summary
Embodiments of the present disclosure provide an image processing method, apparatus, device, and storage medium.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including: an image acquisition module configured to acquire a first target image; a scene determination module configured to determine a first scene corresponding to the first target image; an object determination module configured to determine, from the first target image, a target object to be processed corresponding to the first scene; and an object processing module configured to process the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a processor and a memory, where the processor is configured to execute an image processing program stored in the memory, so as to implement the image processing method described above.
In a fourth aspect, an embodiment of the present disclosure further provides a storage medium storing one or more programs, where the one or more programs are executable by one or more processors to implement the image processing method described above.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure; and
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
In the above figures:
10: image acquisition module; 20: scene determination module; 30: object determination module; 40: object processing module;
400: electronic device; 401: processor; 402: memory; 4021: operating system; 4022: application program; 403: user interface; 404: network interface; 405: bus system.
Detailed Description
To make the purpose, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
To facilitate understanding of the embodiments of the present disclosure, further explanation is given below with specific embodiments in conjunction with the accompanying drawings; these embodiments do not limit the embodiments of the present disclosure.
At present, in the related art, third-party software is typically used to process an image and/or video after the user finishes shooting, so as to erase personal privacy content contained in the image or video. However, processing privacy in this way is inconvenient and may overlook some information. If the original picture is obtained by others, personal privacy information will still be leaked, so personal privacy cannot be effectively protected.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure. The image processing method provided by this embodiment includes steps S10 to S50.
S10: Acquire a first target image.
In this embodiment, the first target image is captured by the user through the camera of a terminal, where the terminal includes but is not limited to a mobile phone, a tablet computer, and the like. The user turns on the camera of the terminal and shoots with the camera to acquire the first target image.
S20: Determine a first scene corresponding to the first target image.
In this embodiment, the first scene is a preset privacy-protection scene, and each privacy-protection scene is preset with corresponding protected-object labels. The privacy-protection scene may be a selfie scene, a meeting scene, a home scene, and the like; the privacy-protection scenes may be set according to actual needs and are not specifically limited in this embodiment.
The protected-object labels include at least one of the following: a human body label, a face label, a text label, a pet label, a furniture label, a car label, a sound label, and a location label; of course, the protected-object labels may also be set according to actual needs. When the protected-object label is a human body label, the object to be protected is the overall human-shaped region, including limbs, head, clothing, and accessories. When the protected-object label is a face label, the object to be protected is biometric information, such as the iris, revealed in a close-range selfie. When the protected-object label is a text label, the object to be protected is text information in the image, such as content in a presentation, writing on a blackboard, and notes. When the protected-object label is a pet label, the objects to be protected are animals appearing in the image, such as cats and dogs. When the protected-object label is a furniture label, the objects to be protected are furniture appearing in the image, such as sofas and decorative paintings. When the protected-object label is a car label, the objects to be protected are vehicles appearing in the image, such as cars, buses, and trucks. When the protected-object label is a sound label, audio is protected; for example, the recording permission is closed by default when shooting a video, or the audio involved in the video is muted after shooting ends. When the protected-object label is a location label, the location information is protected; for example, the location permission is closed by default when shooting an image or video, or the location information involved in the image or video is removed after shooting ends.
In this embodiment, when the privacy-protection scene is a selfie scene, the protected-object labels corresponding to the selfie scene may be a face label, an accessory label, and the like; when the privacy-protection scene is a meeting scene, the protected-object labels corresponding to the meeting scene may be a human body label, a text label, a sound label, and the like;
when the privacy-protection scene is a home scene, the protected-object labels corresponding to the home scene may be a human body label, an accessory label, a furniture label, a pet label, a location label, and the like. The protected-object labels preset for each privacy-protection scene may be set according to the actual situation.
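The scene-to-label correspondence described above can be sketched as a simple lookup table; the scene and label names below are illustrative assumptions, not part of the disclosure.

```python
# Illustrative mapping from preset privacy-protection scenes to
# protected-object labels (scene and label names are assumptions).
SCENE_LABELS = {
    "selfie": {"face", "accessory"},
    "meeting": {"human_body", "text", "sound"},
    "home": {"human_body", "accessory", "furniture", "pet", "location"},
}

def labels_for_scene(scene):
    """Return the protected-object labels preset for a scene (empty if unknown)."""
    return SCENE_LABELS.get(scene, set())
```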
In this embodiment, determining the first scene corresponding to the first target image in step S20 includes: receiving input scene information representing the first scene; and determining the first scene according to the scene information.
In this embodiment, the user may input the scene information through the terminal, for example by touch, voice control, or other means. When the scene information is received, the first scene corresponding to the scene information is obtained by processing the scene information. The user may input the scene information after an image is captured or while it is being captured. When the user inputs scene information while capturing an image, the first scene may be determined as follows: acquiring a second scene; closing the permission object corresponding to the second scene; and, based on the second scene, determining the first scene corresponding to the acquired first target image.
In this embodiment, the second scene may be acquired according to received scene information representing the second scene. The permission object may be a sound object or a location object. When the second scene does not include a permission object, there is no need to close a permission object corresponding to the second scene. The second scene in this embodiment is consistent with the first scene. For example, when the second scene is a home scene and the user captures an image, scene information is input to acquire the second scene corresponding to the scene information, and the location object corresponding to the second scene is closed. After the corresponding permission object is closed, the first target image is acquired and processed according to the second scene.
In this embodiment, the first scene in step S20 may also be determined as follows: acquiring target image feature data of the first target image; and, based on the correspondence between image feature data and a scene model, determining the first scene corresponding to the target image feature data.
In this embodiment, the scene model may be established by machine learning of image feature data in images, for example by learning image feature data with a neural network. The scene model includes a plurality of different privacy-protection scenes; that is, when the target image feature data of the first target image is successfully matched with the image feature data of the scene model, the first scene can be determined according to the target image feature data.
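As a minimal sketch of matching target image feature data against a scene model, a nearest-centroid classifier over toy feature vectors can stand in for the learned model; the feature dimensions and centroid values below are assumptions, not the disclosure's actual model.

```python
def match_scene(features, scene_centroids):
    """Toy stand-in for the scene model: return the privacy-protection
    scene whose feature centroid is closest (Euclidean distance) to the
    target image's feature vector."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    return min(scene_centroids, key=lambda s: dist(features, scene_centroids[s]))

# Toy 2-D centroids for two scenes (values are illustrative).
CENTROIDS = {"home": [0.9, 0.1], "meeting": [0.1, 0.9]}
```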
In this embodiment, the first scene in step S20 may also be determined as follows: receiving input scene information representing a custom scene; determining the custom scene according to the scene information; based on a scene configuration interface, receiving input label information representing the protected-object labels corresponding to the custom scene; determining the protected-object labels according to the label information; configuring the custom scene based on the protected-object labels; and determining the first scene corresponding to the first target image based on the configured custom scene.
In this embodiment, the scene information corresponding to the custom scene may be input while an image is being captured or after capturing ends; that is, the label information of the protected-object labels corresponding to the custom scene may be input according to actual needs.
In this embodiment, it should be noted that after the custom scene is determined, a scene configuration interface pops up; based on this interface, label information representing protected-object labels input by the user is received, and the custom scene is configured according to the protected-object labels corresponding to the label information. After the custom scene is configured, it is stored and used as a preset privacy-protection scene, so as to facilitate the user's subsequent processing of target images.
S30: Determine, from the first target image, a target object to be processed corresponding to the first scene.
In this embodiment, step S30 includes: based on the first scene, determining the protected-object labels corresponding to the first scene; and determining, from the first target image, the target objects to be processed corresponding to the protected-object labels.
In this embodiment, the target object to be processed is the visible protected object corresponding to a protected-object label in the first scene. For example, when the first scene is a home scene, the target objects to be processed may be the protected objects corresponding to the human body label, accessory label, furniture label, pet label, and the like.
In this embodiment, semantic segmentation may be used to recognize the first target image, so as to obtain the target objects to be processed corresponding to the first scene in the first target image.
In this embodiment, when the first scene is a custom scene, if it is recognized that the degree of coincidence between the protected-object labels corresponding to the target objects to be processed and the protected-object labels corresponding to some preset scene is greater than a preset threshold, a replacement prompt is triggered. When an input confirming the replacement prompt is received, the custom scene is replaced with the scene having the highest degree of coincidence with it, and, based on the replaced scene, the target objects to be processed corresponding to the replaced scene are determined from the first target image; when no such input is received, step S40 is executed. The preset threshold may be set according to actual needs and is not specifically limited in this embodiment. In this way, the user is prevented from missing a target object to be protected, which is convenient for the user.
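The coincidence check between a custom scene's labels and a preset scene's labels can be sketched as a Jaccard overlap compared against the preset threshold; the preset scenes, label names, and the 0.8 threshold below are illustrative assumptions.

```python
# Illustrative preset scenes and their protected-object labels.
PRESET_SCENES = {
    "home": {"human_body", "accessory", "furniture", "pet", "location"},
    "meeting": {"human_body", "text", "sound"},
}

def label_overlap(a, b):
    """Degree of coincidence between two protected-object label sets (Jaccard)."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def suggest_replacement(custom_labels, presets=PRESET_SCENES, threshold=0.8):
    """Return the preset scene whose labels coincide with the custom scene's
    labels by more than the threshold, or None (no replacement prompt)."""
    best = max(presets, key=lambda s: label_overlap(custom_labels, presets[s]))
    if label_overlap(custom_labels, presets[best]) > threshold:
        return best
    return None
```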
S40: Process the target object in the first target image so that there is a difference between the processed first target image and the unprocessed first target image.
In this embodiment, it should be noted that the difference between the processed first target image and the unprocessed first target image means that the target object has been covered up in the processed first target image.
In this embodiment, processing the target object in the first target image in step S40 includes: performing masking processing on the target object in the first target image.
In this embodiment, masking the target object in the first target image includes: blurring the target object; or adding a preset element on the target object to cover it. The preset element includes at least one of the following: a texture, a picture, a whiteboard, and a mosaic.
In this embodiment, the picture may be a preset picture such as a cartoon picture, and the preset elements may be set according to actual needs.
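As one possible form of masking with a mosaic element, the following is a minimal pure-Python sketch over a 2-D grid of grayscale values; the block size and data layout are assumptions, not specified by the disclosure.

```python
def mosaic_mask(img, mask, block=2):
    """Cover the masked region with a mosaic: every block*block tile that
    touches the mask is replaced by the tile's average pixel value.
    `img` and `mask` are 2-D lists of the same shape."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [(y, x)
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            if any(mask[y][x] for y, x in tile):
                avg = sum(img[y][x] for y, x in tile) // len(tile)
                for y, x in tile:
                    out[y][x] = avg
    return out
```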
In this embodiment, as an example, a method of blurring the target object to obtain the processed first target image is introduced.
Assuming that there are N target objects to be processed in the first target image, semantic segmentation is used to generate the corresponding masks:
$$\mathrm{Mask}_i(x,y)=\begin{cases}1, & \text{pixel }(x,y)\text{ belongs to the }i\text{-th target object to be processed}\\0, & \text{otherwise}\end{cases}\qquad i=1,\dots,N$$
In the above formula, i denotes the sequence number of a target object to be processed; Mask_i(x, y) denotes the mask of the i-th target object to be processed; x denotes the row coordinate of a pixel in the i-th target object to be processed; and y denotes the column coordinate of a pixel in the i-th target object to be processed.
The region of the target object to be processed is blurred:
The blur convolution kernel is:

$$F=\begin{bmatrix}x_{11}&\cdots&x_{1k}\\\vdots&\ddots&\vdots\\x_{k1}&\cdots&x_{kk}\end{bmatrix},\qquad x_{ij}=\frac{1}{2\pi\delta^{2}}\exp\!\left(-\frac{(i-r-1)^{2}+(j-r-1)^{2}}{2\delta^{2}}\right)$$
In the above formulas, F(i, j) denotes a blur convolution kernel of size k*k; x_ij denotes the value at row i and column j of the kernel, where i, j ≤ k and k = 2r + 1; r denotes the blur radius; and δ denotes an adjustable blur parameter.
After the blur processing, the background image can be obtained:

$$\mathrm{bgimg}(x,y)=\sum_{i=-r}^{r}\sum_{j=-r}^{r}F(i+r+1,\,j+r+1)\cdot \mathrm{srcimg}(x+i,\,y+j)$$
In the above formula, bgimg(x, y) denotes the background data at pixel (x, y), and srcimg(x + i, y + j) denotes the data within the radius-r neighborhood of pixel (x, y) in the first target image before processing.
After the background image is fused with the first target image before processing, the processed first target image is obtained:

$$\mathrm{dastimg}(x,y)=\big(1-\mathrm{Mask}_i(x,y)\big)\cdot \mathrm{srcimg}(x,y)+\mathrm{Mask}_i(x,y)\cdot \mathrm{bgimg}(x,y)$$

In the above formula, dastimg(x, y) denotes the image value at pixel (x, y) in the first target image after blurring.
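The mask-blur-fuse steps above can be sketched in a few lines of pure Python; the Gaussian form of the kernel entries is an assumption consistent with the parameters k = 2r + 1 and δ, and borders are handled by clamping, which the disclosure does not specify.

```python
import math

def gaussian_kernel(r, delta):
    """k*k blur kernel with k = 2r + 1, normalized to sum to 1
    (the Gaussian form of the entries is an assumption)."""
    k = 2 * r + 1
    raw = [[math.exp(-((i - r) ** 2 + (j - r) ** 2) / (2 * delta ** 2))
            for j in range(k)] for i in range(k)]
    total = sum(sum(row) for row in raw)
    return [[v / total for v in row] for row in raw]

def blur_and_fuse(src, mask, r=1, delta=1.0):
    """dastimg = (1 - Mask)*srcimg + Mask*bgimg, where bgimg is srcimg
    convolved with the blur kernel over an r-neighborhood."""
    h, w = len(src), len(src[0])
    F = gaussian_kernel(r, delta)
    dst = [row[:] for row in src]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue  # pixels outside the mask keep their srcimg values
            bg = 0.0
            for i in range(-r, r + 1):
                for j in range(-r, r + 1):
                    yy = min(max(y + i, 0), h - 1)  # clamp at the borders
                    xx = min(max(x + j, 0), w - 1)
                    bg += F[i + r][j + r] * src[yy][xx]
            dst[y][x] = bg
    return dst
```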
S50: Receive an input information-clearing request and, based on the information-clearing request, clear the attribute information of the processed first target image.
In this embodiment, the attribute information may include randomly generated time information, randomly generated location information, terminal information, exposure parameter information, and the like. Clearing the attribute information of the processed first target image may mean completely clearing the attribute information of the processed first image, or replacing it with a random number.
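Clearing or randomizing the attribute information can be sketched as below; the attribute field names are illustrative, not a real metadata schema.

```python
import random

def clear_attributes(attrs, randomize=("time", "location")):
    """Clear a processed image's attribute info: drop every field entirely,
    except that fields listed in `randomize` are replaced with a random
    number instead of being removed (field names are illustrative)."""
    cleaned = {}
    for key in attrs:
        if key in randomize:
            cleaned[key] = random.random()
    return cleaned
```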
In this embodiment, it should be noted that after step S50 is executed, the camera of the terminal is exited.
The image processing method provided by this embodiment processes, according to the scene corresponding to an image, the target objects and permission objects corresponding to that scene, so as to protect the content in the image that involves the user's personal privacy, thereby preventing leakage of the user's personal privacy from the original image.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure. The image processing method provided by this embodiment includes steps S11 to S61.
S11: Acquire video stream data.
In this embodiment, the video stream data is captured by the user through the camera of a terminal, where the terminal includes but is not limited to a mobile phone, a tablet computer, and the like. The user turns on the camera of the terminal and shoots with it to acquire the video stream data.
S21: Divide the video stream data into frames to obtain a plurality of first target images before processing.
In this embodiment, the video stream data is divided into frames according to their temporal order, so that a plurality of unprocessed first target images in temporal order are obtained.
S31:确定与多个处理前的第一目标图像对应的第一场景。S31: Determine a first scene corresponding to a plurality of first target images before processing.
本实施例中,第一场景为预设的隐私保护场景,隐私保护场景与上述所描述一致,本实施例在此不做赘述。In this embodiment, the first scene is a preset privacy protection scene, and the privacy protection scene is consistent with the above description, so this embodiment will not repeat it here.
本实施例中,S31步骤中确定与多个第一目标图像对应的第一场景,包括:接收输入的表征第一场景的场景信息;以及根据场景信息确定第一场景。In this embodiment, determining the first scene corresponding to the plurality of first target images in step S31 includes: receiving input scene information representing the first scene; and determining the first scene according to the scene information.
In this embodiment, the user may input the scene information through the terminal, for example by touch control or voice control. When the scene information is received, it is processed to obtain the first scene corresponding to it. The user may input the scene information either after shooting the video or while shooting it. When the user inputs the scene information while shooting, the first scene may be determined as follows: acquire a second scene; close the permission object corresponding to the second scene; and, based on the second scene, determine the first scene corresponding to the acquired first target images.
In this embodiment, the second scene may be acquired according to received scene information representing the second scene. The permission object may be a sound protection tag or a location protection tag. When the second scene includes no permission object, there is no need to close any permission object for it. The second scene in this embodiment is consistent with the first scene. For example, when the second scene is a home scene and the user shoots a video, the input scene information is received to acquire the corresponding second scene, and the location object corresponding to the second scene is closed. After the corresponding permission object is closed, the video stream data is acquired and split into frames to obtain a plurality of pre-processing first target images, which are then processed according to the second scene.
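The closing of permission objects can be sketched as follows; this is an illustration only, and both the scene names and the mapping from scenes to permission objects (sound/location protection tags) are assumptions rather than content of the disclosure.

```python
# Assumed mapping from second scenes to their permission objects.
SCENE_PERMISSION_OBJECTS = {
    "home": {"location"},             # home scene: close the location object
    "meeting": {"sound", "location"},
    "outdoor": set(),                 # no permission object: nothing to close
}

def close_permission_objects(scene: str, active_permissions: set) -> set:
    """Return the permissions left active after closing those that
    correspond to the given second scene."""
    return active_permissions - SCENE_PERMISSION_OBJECTS.get(scene, set())
```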
When the user inputs the scene after shooting the video, the first scene may be determined as follows: acquire a third scene; process preset audio corresponding to the third scene in the acquired video stream data; split the video stream data with the processed preset audio into frames to obtain a plurality of pre-processing first target images; and, based on the third scene, determine the first scene corresponding to the plurality of pre-processing first target images.
In this embodiment, after the user finishes shooting the video, the third scene may be acquired according to received scene information representing it. The preset audio may refer to an audio object contained in the video stream data. When the third scene includes no preset audio, there is no need to process any preset audio for it. The third scene in this embodiment is consistent with the first scene. Processing the preset audio in the video may refer to separating the image data from the audio data in the video stream data and removing the audio data, thereby muting the video.
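A minimal sketch of the muting step described above (an illustration, not the disclosed implementation): the stream is modeled as a dict with separate image and audio tracks, an assumption made so the separation of image data from audio data can be shown in a few lines.

```python
def strip_preset_audio(stream: dict, scene_has_preset_audio: bool) -> dict:
    """Separate the image data from the audio data and drop the audio,
    muting the video; if the third scene defines no preset audio, the
    stream is returned unchanged."""
    if not scene_has_preset_audio:
        return stream
    return {"frames": stream["frames"], "audio": None}
```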
In this embodiment, by processing the permission object and the preset audio corresponding to the scene in the video, the protection of personal privacy in the video is further strengthened.
In this embodiment, the first scene in step S31 may also be determined as follows: acquire target image feature data of the first target images; and, based on the correspondence between image feature data and scene models, determine the first scene corresponding to the target image feature data.
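The feature-based determination can be sketched as a nearest-model lookup; this is only an illustration of the idea, since the disclosure merely requires some correspondence between image feature data and scene models, and the toy feature vectors and squared-distance rule below are assumptions.

```python
# Assumed scene models: each scene maps to a reference feature vector.
SCENE_MODELS = {
    "home":    [0.9, 0.1, 0.0],
    "office":  [0.1, 0.8, 0.1],
    "outdoor": [0.0, 0.1, 0.9],
}

def determine_first_scene(features):
    """Return the scene whose model vector is closest (squared L2) to the
    target image feature data."""
    def sq_dist(model):
        return sum((a - b) ** 2 for a, b in zip(features, model))
    return min(SCENE_MODELS, key=lambda name: sq_dist(SCENE_MODELS[name]))
```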
The method for determining the scene model is consistent with the description above and is not repeated here.
In this embodiment, the first scene may also be determined as follows: receive input scene information representing a custom scene; determine the custom scene according to the scene information; based on a scene configuration interface, receive input tag information representing protected-object tags corresponding to the custom scene; determine the protected-object tags according to the tag information; configure the custom scene based on the protected-object tags; and determine the first scene corresponding to the first target image based on the configured custom scene.
In this embodiment, the scene information corresponding to the custom scene may be input while shooting or after the video stream data has been acquired; that is, the tag information of the protected-object tags corresponding to the custom scene may be input as actually needed.
In this embodiment, it should be noted that once a custom scene is determined, a scene configuration interface pops up. Based on this interface, tag information representing protected-object tags is received from the user, and the custom scene is configured according to the corresponding protected-object tags. After the custom scene is configured, it is stored and used as a preset privacy protection scene, which facilitates the user's later processing of target images.
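The configure-then-store flow for a custom scene can be sketched as below; the plain-dict scene store and the example tag names are assumptions for illustration, not the disclosed data layout.

```python
# Assumed store of preset privacy protection scenes.
PRESET_PRIVACY_SCENES = {"home": {"face", "location"}}

def configure_custom_scene(name: str, tag_info) -> set:
    """Build the custom scene from its protected-object tags and persist
    it so it can be reused later as a preset privacy protection scene."""
    tags = set(tag_info)
    PRESET_PRIVACY_SCENES[name] = tags
    return tags
```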
S41: Determine, from each pre-processing first target image, a target object to be processed corresponding to the first scene.
In this embodiment, step S41 includes: based on the first scene, determining the protected-object tag corresponding to the first scene; and determining, from each pre-processing first target image, the target object to be processed corresponding to the protected-object tag.
Semantic segmentation, an image processing technique, may be used to recognize each pre-processing first target image so as to obtain, in each such image, the target object to be processed corresponding to the protected-object tag of the first scene. The target object to be processed is visible.
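The tag-matching part of step S41 can be shown in isolation as follows; a real system would run a semantic segmentation model over each frame, so representing the recognition result as `(label, region)` pairs is an assumption made purely for this sketch.

```python
def targets_to_process(detections, protected_tags):
    """Keep only the visible objects whose label matches a
    protected-object tag of the first scene.

    detections: iterable of (label, region) pairs, e.g. bounding boxes
    produced by an assumed segmentation/recognition step."""
    return [region for label, region in detections if label in protected_tags]
```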
S51: According to the time sequence relationship of the plurality of pre-processing first target images obtained by frame splitting, process the target object of each pre-processing first target image in turn, to obtain a plurality of processed first target images.
In this embodiment, processing the target object in a pre-processing first target image in step S51 includes: masking the target object in the first target image.
In this embodiment, masking the target object in the first target image in step S51 includes: blurring the target object; or adding a preset element over the target object to cover it. The preset element includes at least one of: a texture, a picture, a whiteboard, and a mosaic.
In this embodiment, the picture may be a preset picture such as a cartoon picture, and the preset element may be set as actually needed.
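One of the listed masking options, the mosaic, can be sketched as below: the image is modeled as a list of rows of grayscale values and the target region is pixelated by replacing each small block with its average. The block size and image layout are assumptions; blurring or pasting a preset picture over the region would follow the same pattern.

```python
def mosaic(image, x0, y0, x1, y1, block=2):
    """Cover image[y0:y1][x0:x1] with a coarse mosaic so the target
    object is no longer recognizable; the input image is left intact."""
    out = [row[:] for row in image]
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            # Collect the pixels of one mosaic cell, clipped to the region.
            cells = [(y, x)
                     for y in range(by, min(by + block, y1))
                     for x in range(bx, min(bx + block, x1))]
            avg = sum(out[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                out[y][x] = avg
    return out
```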
In this embodiment, it should be noted that after the plurality of processed first target images are obtained, if an input information-clearing request is received, the attribute information of the processed first target images is cleared in turn based on the request.
The attribute information may include randomly generated time information, randomly generated location information, terminal information, exposure parameter information, and the like. Clearing the attribute information of a processed first target image may mean either completely removing it or replacing it with a random number.
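Both clearing options can be sketched in a few lines; the attribute field names below mirror the examples in the text (time, location, terminal, exposure parameters) but the dict representation is an assumption for illustration.

```python
import random

def clear_attribute_info(attrs: dict, randomize: bool = False) -> dict:
    """Handle an information-clearing request: either completely remove
    the processed image's attribute information, or substitute every
    value with a random number."""
    if not randomize:
        return {}
    return {key: random.random() for key in attrs}
```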
S61: Based on the time sequence relationship, generate processed video stream data from the plurality of processed first target images, so that the processed video stream data differs from the pre-processing video stream data.
In this embodiment, the processed video stream data can be generated by frame-merging the plurality of processed first target images based on the time sequence relationship. That the processed video stream data differs from the pre-processing video stream data means that the target object has been masked in the processed video stream data.
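The frame-merging of step S61 can be sketched as the inverse of the earlier frame-splitting; representing the processed images as `(timestamp, image)` pairs is an assumption, and a real implementation would re-encode into an actual container format.

```python
def merge_frames(processed_frames):
    """Generate the processed video stream by frame-merging the processed
    first target images in their original time sequence order."""
    return [img for _, img in sorted(processed_frames, key=lambda f: f[0])]
```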
In this embodiment, it should be noted that after step S61 is performed, the camera of the terminal is exited.
The image processing method provided in this embodiment processes, according to the scene corresponding to a video, the target object, the permission object, and the preset audio corresponding to that scene in the images of the video, so as to protect content in the video that involves the user's personal privacy, thereby preventing leakage of the user's personal privacy at the level of the original video.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure. The image processing apparatus includes an image acquisition module 10, a scene determination module 20, an object determination module 30, and an object processing module 40. The image acquisition module 10 is configured to acquire a first target image; the scene determination module 20 is configured to determine a first scene corresponding to the first target image; the object determination module 30 is configured to determine, from the first target image, a target object to be processed corresponding to the first scene; and the object processing module 40 is configured to process the target object in the first target image so that the processed first target image differs from the pre-processing first target image.
In this embodiment, the object determination module 30 is further configured to determine, based on the first scene, the protected-object tag corresponding to the first scene, and to determine, from the first target image, the target object to be processed corresponding to the protected-object tag.
In this embodiment, the protected-object tag includes at least one of: a human body tag, a face tag, a text tag, a pet tag, a furniture tag, a car tag, a sound tag, and a location tag.
In this embodiment, the scene determination module 20 is further configured to receive input scene information representing the first scene and to determine the first scene according to the scene information.
In this embodiment, the scene determination module 20 is further configured to acquire target image feature data of the first target image and, based on the correspondence between image feature data and scene models, determine the first scene corresponding to the target image feature data.
In this embodiment, the object processing module 40 is further configured to mask the target object in the first target image so as to process it.
In this embodiment, it should be noted that the masking includes at least one of: blurring the target object; or adding a preset element over the target object to cover it. The preset element includes at least one of: a texture, a picture, a whiteboard, and a mosaic.
The picture may be a preset picture such as a cartoon picture, and the preset element may be set as actually needed.
In this embodiment, the image acquisition module 10 is further configured to acquire video stream data and split it into frames to obtain a plurality of pre-processing first target images.
In this embodiment, the object processing module 40 is further configured to process the target object of each pre-processing first target image in turn according to the time sequence relationship of the plurality of pre-processing first target images obtained by frame splitting, to obtain a plurality of processed first target images; and, based on the time sequence relationship, to generate processed video stream data from the plurality of processed first target images, so that the processed video stream data differs from the pre-processing video stream data.
In this embodiment, the scene determination module 20 is further configured to acquire a second scene, close the permission object corresponding to the second scene based on the second scene, and determine, based on the second scene, the first scene corresponding to the acquired first target image.
In this embodiment, the scene determination module 20 is further configured to acquire a third scene, process preset audio corresponding to the third scene in the acquired video stream data, split the video stream data with the processed preset audio into frames to obtain a plurality of pre-processing first target images, and determine, based on the third scene, the first scene corresponding to the plurality of pre-processing first target images.
The shooting apparatus provided in this embodiment processes, according to the scene corresponding to an image or video, the objects corresponding to that scene in the image or video, so as to protect content that involves the user's personal privacy, thereby preventing leakage of the user's personal privacy at the level of the original image.
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. The electronic device 400 shown in FIG. 4 includes: at least one processor 401, a memory 402, at least one network interface 404, and other user interfaces 403. The components of the electronic device 400 are coupled together through a bus system 405. It can be understood that the bus system 405 is used for connection and communication between these components. In addition to a data bus, the bus system 405 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 405 in FIG. 4.
The user interface 403 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch pad, or a touch screen).
It can be understood that the memory 402 in this embodiment of the present disclosure may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example rather than limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 402 described herein is intended to include, but is not limited to, these and any other suitable types of memory.
In some implementations, the memory 402 stores the following elements, executable units or data structures, or a subset or an extended set thereof: an operating system 4021 and an application program 4022.
The operating system 4021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application program 4022 contains various application programs, such as a media player and a browser, for implementing various application services. A program implementing the methods of the embodiments of the present disclosure may be contained in the application program 4022.
In this embodiment of the present disclosure, by calling a program or instructions stored in the memory 402, which may be a program or instructions stored in the application program 4022, the processor 401 is configured to perform the method steps provided by the method embodiments, for example: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that the processed first target image differs from the pre-processing first target image.
The methods disclosed in the foregoing embodiments of the present disclosure may be applied to, or implemented by, the processor 401. The processor 401 may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic block diagrams disclosed in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present disclosure may be directly embodied as being completed by a hardware decoding processor, or by a combination of the hardware and software units in a decoding processor.
The software unit may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 402; the processor 401 reads information from the memory 402 and completes the steps of the above methods in combination with its hardware.
It can be understood that the embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in the present disclosure, or a combination thereof.
For software implementation, the techniques described herein may be implemented by units that perform the functions described herein. Software code may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in FIG. 4, which can perform all the steps of the image processing methods shown in FIGS. 1-2 and thereby achieve their technical effects; refer to the related descriptions of FIGS. 1-2, which are not repeated here for brevity.
An embodiment of the present disclosure further provides a storage medium (a computer-readable storage medium), which stores one or more programs. The storage medium may include a volatile memory, such as a random access memory; it may also include a non-volatile memory, such as a read-only memory, a flash memory, a hard disk, or a solid-state disk; it may also include a combination of the above types of memory.
The one or more programs in the storage medium can be executed by one or more processors to implement the above image processing method performed on the shooting device side.
The processor is configured to execute the shooting program stored in the memory, so as to implement the following steps of the image processing method performed on the shooting device side: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that the processed first target image differs from the pre-processing first target image.
The image processing method provided by the embodiments of the present disclosure includes: acquiring a first target image; determining a first scene corresponding to the first target image; determining, from the first target image, a target object to be processed corresponding to the first scene; and processing the target object in the first target image so that the processed first target image differs from the pre-processing first target image. According to the scene corresponding to an image, the embodiments of the present disclosure process the target object corresponding to that scene in the image, so as to protect content in the image that involves the user's personal privacy, thereby preventing leakage of the user's personal privacy in the original image.
Those skilled in the art should further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of functions. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented by hardware, by software modules executed by a processor, or by a combination of the two. A software module may be placed in a random access memory (RAM), an internal memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, circuit, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, circuit, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, circuit, article, or device that includes the element.
The above descriptions are merely specific implementations of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure.
Therefore, the present disclosure will not be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (15)

  1. An image processing method, comprising:
    acquiring a first target image;
    determining a first scene corresponding to the first target image;
    determining, from the first target image, a target object to be processed corresponding to the first scene; and
    processing the target object in the first target image so that the processed first target image differs from the pre-processing first target image.
  2. The image processing method according to claim 1, wherein determining the first scene corresponding to the first target image comprises:
    receiving input scene information representing the first scene; and
    determining the first scene according to the scene information.
  3. The image processing method according to claim 1, wherein determining the first scene corresponding to the first target image comprises:
    acquiring target image feature data of the first target image; and
    determining the first scene corresponding to the target image feature data based on a correspondence between image feature data and scene models.
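Claim 3 leaves the feature-to-scene correspondence open. One simple realization, shown purely as an assumption, is to store a reference feature vector per scene model and pick the scene with the highest cosine similarity to the extracted features:

```python
import math

# Illustrative per-scene reference feature vectors (not from the patent).
SCENE_MODELS = {
    "meeting_room": [0.9, 0.1, 0.0],
    "street":       [0.1, 0.8, 0.6],
}

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_scene(features):
    # Return the scene whose stored model best matches the features.
    return max(SCENE_MODELS, key=lambda s: cosine(features, SCENE_MODELS[s]))

best = match_scene([0.85, 0.2, 0.05])  # features resembling "meeting_room"
```

A production system would likely use a learned classifier instead of a fixed similarity lookup; the point is only that the correspondence of claim 3 reduces to a mapping from feature data to a scene identifier.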
  4. The image processing method according to claim 1, further comprising: acquiring video stream data;
    wherein acquiring the first target image comprises: performing frame-division processing on the video stream data to obtain a plurality of unprocessed first target images.
  5. The image processing method according to claim 4, wherein processing the target object in the first target image so that the processed first target image differs from the unprocessed first target image comprises:
    processing the target object in each unprocessed first target image in turn, according to the temporal relationship of the plurality of unprocessed first target images obtained by the frame-division processing, to obtain a plurality of processed first target images; and
    generating processed video stream data from the plurality of processed first target images based on the temporal relationship, so that the processed video stream data differs from the unprocessed video stream data.
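Claims 4-5 amount to a split / process-in-order / reassemble loop over frames. A minimal sketch, modeling each frame as a `(timestamp, payload)` tuple (the model and the `"*"` stand-in for masking are assumptions):

```python
def split_frames(stream):
    # Frame-division: in this toy model the stream is already a
    # sequence of (timestamp, payload) frames.
    return list(stream)

def process_frame(frame):
    # Stand-in for masking the target object in one frame.
    ts, payload = frame
    return (ts, payload + "*")

def process_stream(stream):
    frames = split_frames(stream)
    frames.sort(key=lambda f: f[0])              # honor temporal order
    processed = [process_frame(f) for f in frames]
    return processed                             # reassembled stream

processed_stream = process_stream([(2, "b"), (1, "a"), (3, "c")])
```

Preserving the timestamp order before and after per-frame processing is what guarantees the regenerated stream plays back correctly while still differing from the input.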
  6. The image processing method according to claim 1, further comprising: acquiring a second scene; and closing a permission object corresponding to the second scene;
    wherein determining the first scene corresponding to the first target image comprises: determining, based on the second scene, the first scene corresponding to the acquired first target image.
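Claim 6's "closing a permission object" can be pictured as revoking scene-associated capabilities before any image is captured. The scene names and permission strings below are illustrative only:

```python
# Hypothetical mapping: scene -> permission objects to close in it.
SCENE_PERMISSIONS = {
    "conference": ["microphone"],
    "outdoors":   ["location", "camera_gps_tag"],
}

def close_permissions(scene, granted):
    # Return the granted permissions minus those closed for the scene.
    to_close = set(SCENE_PERMISSIONS.get(scene, []))
    return {p for p in granted if p not in to_close}

remaining = close_permissions("outdoors", {"location", "microphone", "camera"})
```

Here recognizing the "outdoors" scene closes location access, so location data never reaches the downstream image pipeline.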
  7. The image processing method according to claim 4, further comprising: acquiring a third scene; and processing preset audio corresponding to the third scene in the acquired video stream data;
    wherein performing frame-division processing on the video stream data to obtain the plurality of unprocessed first target images comprises:
    performing frame-division processing on the video stream data after the preset audio has been processed, to obtain the plurality of unprocessed first target images; and
    wherein determining the first scene corresponding to the first target image comprises:
    determining, based on the third scene, the first scene corresponding to the plurality of unprocessed first target images.
  8. The image processing method according to claim 1, wherein processing the target object in the first target image comprises:
    performing masking processing on the target object in the first target image.
  9. The image processing method according to claim 8, wherein the masking processing comprises at least one of the following:
    blurring the target object; or
    adding a preset element onto the target object to cover it, the preset element comprising at least one of the following: a texture, a picture, a whiteboard, or a mosaic.
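Of the masking options in claim 9, the mosaic is the easiest to sketch: replace each k×k block of the target region with its average value. This is a generic pixelation routine on a grayscale image held as nested lists, offered only as an assumption about how the mosaic element might be realized:

```python
def mosaic(img, top, left, h, w, k=2):
    # Pixelate the h×w region at (top, left) using k×k averaging blocks.
    out = [row[:] for row in img]
    for by in range(top, top + h, k):
        for bx in range(left, left + w, k):
            block = [img[y][x]
                     for y in range(by, min(by + k, top + h))
                     for x in range(bx, min(bx + k, left + w))]
            avg = sum(block) // len(block)
            for y in range(by, min(by + k, top + h)):
                for x in range(bx, min(bx + k, left + w)):
                    out[y][x] = avg
    return out

img = [[0, 10, 20, 30],
       [40, 50, 60, 70],
       [80, 90, 100, 110],
       [120, 130, 140, 150]]
res = mosaic(img, 0, 0, 2, 2)  # pixelate the top-left 2x2 region
```

The top-left block collapses to its average while pixels outside the target region are untouched, which is exactly the localized, reversible-looking difference the claims call for. A blur would replace the block average with, e.g., a Gaussian-weighted neighborhood average.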
  10. The image processing method according to claim 1, further comprising:
    receiving an input information-clearing request; and
    clearing attribute information of the processed first target image based on the information-clearing request.
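Claim 10's attribute clearing corresponds to stripping identifying metadata (e.g. Exif-style tags) from the processed image on request. The attribute keys below are illustrative assumptions, not a list from the patent:

```python
# Hypothetical set of attribute keys treated as identifying.
SENSITIVE_ATTRS = {"gps", "timestamp", "device_model"}

def clear_attributes(image_meta, request):
    # Drop identifying attributes only when a clear request was received.
    if not request.get("clear"):
        return image_meta
    return {k: v for k, v in image_meta.items() if k not in SENSITIVE_ATTRS}

meta = {"gps": "31.2,121.5", "timestamp": "2021-12-13", "width": 1920}
cleaned = clear_attributes(meta, {"clear": True})
```

Non-identifying attributes such as image dimensions survive the clearing, so the image remains usable while its provenance information is removed.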
  11. The image processing method according to claim 1, wherein determining, from the first target image, the target object to be processed corresponding to the first scene comprises:
    determining, based on the first scene, a protected-object label corresponding to the first scene; and
    determining, from the first target image, the target object to be processed corresponding to the protected-object label.
  12. The image processing method according to claim 11, wherein the protected-object label comprises at least one of the following:
    a human-body label, a face label, a text label, a pet label, a furniture label, a car label, a sound label, or a location label.
  13. An image processing apparatus, comprising:
    an image acquisition module, configured to acquire a first target image;
    a scene determination module, configured to determine a first scene corresponding to the first target image;
    an object determination module, configured to determine, from the first target image, a target object to be processed corresponding to the first scene; and
    an object processing module, configured to process the target object in the first target image, so that the processed first target image differs from the unprocessed first target image.
  14. An electronic device, comprising a processor and a memory, wherein the processor is configured to execute an image processing program stored in the memory, so as to implement the image processing method according to any one of claims 1 to 12.
  15. A storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the image processing method according to any one of claims 1 to 12.
PCT/CN2022/125963 2021-12-13 2022-10-18 Image processing method and apparatus, and device and storage medium WO2023109299A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111521605.1 2021-12-13
CN202111521605.1A CN115499703A (en) 2021-12-13 2021-12-13 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023109299A1 true WO2023109299A1 (en) 2023-06-22

Family

ID=84465008

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125963 WO2023109299A1 (en) 2021-12-13 2022-10-18 Image processing method and apparatus, and device and storage medium

Country Status (2)

Country Link
CN (1) CN115499703A (en)
WO (1) WO2023109299A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115499703A (en) * 2021-12-13 2022-12-20 中兴通讯股份有限公司 Image processing method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020054211A1 (en) * 2000-11-06 2002-05-09 Edelson Steven D. Surveillance video camera enhancement system
CN106961559A (en) * 2017-03-20 2017-07-18 维沃移动通信有限公司 The preparation method and electronic equipment of a kind of video
CN108985176A (en) * 2018-06-20 2018-12-11 北京优酷科技有限公司 image generating method and device
CN109743579A (en) * 2018-12-24 2019-05-10 秒针信息技术有限公司 A kind of method for processing video frequency and device, storage medium and processor
CN110298862A (en) * 2018-03-21 2019-10-01 广东欧珀移动通信有限公司 Method for processing video frequency, device, computer readable storage medium and computer equipment
CN110933299A (en) * 2019-11-18 2020-03-27 深圳传音控股股份有限公司 Image processing method and device and computer storage medium
CN113627339A (en) * 2021-08-11 2021-11-09 普联技术有限公司 Privacy protection method, device and equipment
CN115499703A (en) * 2021-12-13 2022-12-20 中兴通讯股份有限公司 Image processing method, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011101161A (en) * 2009-11-05 2011-05-19 Canon Inc Imaging device, control method of the same, reproducing device, and program
CN105407261A (en) * 2014-08-15 2016-03-16 索尼公司 Image processing device and method, and electronic equipment
CN109274919A (en) * 2017-07-18 2019-01-25 福州瑞芯微电子股份有限公司 Method for secret protection, system, video call terminal and system in video calling
EP3564900B1 (en) * 2018-05-03 2020-04-01 Axis AB Method, device and system for a degree of blurring to be applied to image data in a privacy area of an image
CN110502974A (en) * 2019-07-05 2019-11-26 深圳壹账通智能科技有限公司 A kind of methods of exhibiting of video image, device, equipment and readable storage medium storing program for executing
CN112270018B (en) * 2020-11-11 2022-08-16 中国科学院信息工程研究所 Scene-sensitive system and method for automatically placing hook function


Also Published As

Publication number Publication date
CN115499703A (en) 2022-12-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22906051

Country of ref document: EP

Kind code of ref document: A1