WO2021057267A1 - Image processing method and terminal device (图像处理方法及终端设备) - Google Patents

Image processing method and terminal device (图像处理方法及终端设备)

Info

Publication number
WO2021057267A1
Authority
WO
WIPO (PCT)
Prior art keywords
target image
terminal device
module
target
training sample
Prior art date
Application number
PCT/CN2020/106777
Other languages
English (en)
French (fr)
Inventor
任帅
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2021057267A1 publication Critical patent/WO2021057267A1/zh

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Definitions

  • the embodiments of the present disclosure relate to the field of communication technologies, and in particular, to an image processing method and terminal equipment.
  • With the development of terminal device technology, users use terminal devices more and more frequently, and their requirements for the shooting function of terminal devices are increasingly high.
  • In the related art, the camera in the terminal device uses optical zoom and digital zoom to increase its zoom factor, which enlarges distant scenes during shooting and thus allows high-quality long-range images to be captured.
  • the embodiments of the present disclosure provide an image processing method and terminal device to solve the problem of privacy information leakage in the process of remote shooting.
  • an embodiment of the present disclosure provides an image processing method, the method including:
  • when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold, acquiring the target image in the preview interface of the camera;
  • the above-mentioned target image is recognized, and if it is recognized that the target image contains preset privacy information, the target area in the target image is blurred, and the target area is the area where the preset privacy information is located.
  • the embodiments of the present disclosure also provide a terminal device, which includes:
  • An acquiring module configured to acquire the target image in the preview interface of the camera when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold
  • the recognition module is used to recognize the above-mentioned target image acquired by the above-mentioned acquisition module
  • the blurring module is configured to blur the target area in the target image if the recognition module recognizes that the target image contains preset privacy information, and the target area is the area where the preset privacy information is located.
  • the embodiments of the present disclosure provide a terminal device, including a processor, a memory, and a computer program stored on the memory and capable of running on the processor.
  • when the computer program is executed by the processor, the steps of the image processing method described in the first aspect are implemented.
  • embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the image processing method described in the first aspect are implemented.
  • the terminal device when the first zoom factor of the camera of the terminal device is greater than or equal to the predetermined threshold, the terminal device acquires the target image in the preview interface of the camera, and recognizes the target image. In the case that the target image contains preset private information, the terminal device blurs the area where the preset private information is located in the target image to avoid unintentional or malicious disclosure of other people's private information during remote shooting.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the disclosure
  • FIG. 3 is one of the schematic diagrams of an interface applied by an image processing method provided by an embodiment of the disclosure
  • FIG. 4 is a second schematic diagram of an interface applied by an image processing method provided by an embodiment of the disclosure.
  • FIG. 5 is one of the schematic structural diagrams of a terminal device provided by an embodiment of the disclosure.
  • FIG. 6 is the second structural diagram of a terminal device provided by an embodiment of the disclosure.
  • The "/" in this document means "or"; for example, A/B can mean A or B.
  • The "and/or" in this document merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
  • Words such as "first" and "second" are used to distinguish between identical or similar items having substantially the same function or effect; those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or the execution order.
  • the first zoom factor and the second zoom factor are used to distinguish different zoom factors, not to describe a specific sequence of zoom factors.
  • The execution subject of the image processing method provided by the embodiments of the present disclosure may be the aforementioned terminal device (including mobile and non-mobile terminal devices), or may be a functional module and/or functional entity in the terminal device that can implement the image processing method; this can be determined according to actual usage requirements and is not limited by the embodiments of the present disclosure.
  • the following takes a terminal device as an example to illustrate the image processing method provided by the embodiment of the present disclosure.
  • the terminal device in the embodiment of the present disclosure may be a mobile terminal device or a non-mobile terminal device.
  • the mobile terminal device can be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA)
  • the non-mobile terminal device may be a personal computer (PC), a television (television, TV), a teller machine, or a self-service machine, etc.; the embodiment of the present disclosure does not specifically limit it.
  • the terminal device in the embodiment of the present disclosure may be a terminal device with an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present disclosure.
  • the following takes the Android operating system as an example to introduce the software environment to which the image processing method provided in the embodiments of the present disclosure is applied.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure.
  • the architecture of the Android operating system includes 4 layers, namely: application layer, application framework layer, system runtime library layer, and kernel layer (specifically, it may be the Linux kernel layer).
  • the application layer includes various applications (including system applications and third-party applications) in the Android operating system.
  • the application framework layer is the framework of the application. Developers can develop some applications based on the application framework layer while complying with the development principles of the application framework.
  • the system runtime library layer includes a library (also called a system library) and an Android operating system runtime environment.
  • the library mainly provides various resources needed by the Android operating system.
  • the Android operating system operating environment is used to provide a software environment for the Android operating system.
  • the kernel layer is the operating system layer of the Android operating system and belongs to the lowest level of the Android operating system software.
  • the kernel layer is based on the Linux kernel to provide core system services and hardware-related drivers for the Android operating system.
  • Taking the Android operating system as an example, developers can develop software programs that implement the image processing method provided by the embodiments of the present disclosure based on the system architecture of the Android operating system shown in FIG. 1, so that the image processing method can run on the Android operating system shown in FIG. 1. That is, the processor or the terminal device can implement the image processing method provided by the embodiments of the present disclosure by running the software program in the Android operating system.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure, including step 201 and step 202:
  • Step 201 When the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold, the terminal device acquires the target image in the preview interface of the camera.
  • the aforementioned zoom may include: optical zoom, digital zoom, and dual-camera zoom.
  • the terminal device may increase the zoom factor of the camera to enlarge the distant scene during shooting, so as to take a high-quality distant scene image.
  • When the user is shooting, the terminal device can automatically obtain the first zoom factor of the camera. When the first zoom factor of the camera is greater than or equal to a predetermined threshold (for example, a zoom factor of 3x), the camera of the terminal device can be considered to have entered the remote shooting mode, and the terminal device obtains the target image in the preview interface of the camera.
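  • As a minimal sketch of step 201 (assuming an OpenCV-style camera API and the 3x threshold from the example above; neither is mandated by the embodiment), the check can be written as a simple guard on the current zoom factor before a preview frame is captured:

```python
import cv2

ZOOM_THRESHOLD = 3.0  # example predetermined threshold (3x) from the embodiment above


def acquire_target_image(camera: cv2.VideoCapture, zoom_factor: float):
    """Step 201 sketch: return the current preview frame only when the first
    zoom factor is at or above the predetermined threshold (remote shooting mode)."""
    if zoom_factor < ZOOM_THRESHOLD:
        return None  # normal shooting: privacy processing is not triggered
    ok, frame = camera.read()  # grab the frame shown in the preview interface
    return frame if ok else None
```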
  • Step 202 The terminal device recognizes the above-mentioned target image, and if it recognizes that the target image contains preset privacy information, the terminal device blurs the target area in the target image.
  • the above-mentioned target area is the area where the preset private information is located, and the terminal device may determine the target area by acquiring the coordinates of the preset private information in the target image.
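  • The embodiment does not fix a coordinate format for the preset privacy information; assuming the detector reports an axis-aligned box (x, y, w, h) in pixels, the target area can be obtained by clamping that box to the image, as in the following sketch:

```python
import numpy as np


def target_area_from_coords(image: np.ndarray, box):
    """Clamp a detected (x, y, w, h) box to the image bounds and return slices
    covering the target area (the region where the privacy information lies)."""
    img_h, img_w = image.shape[:2]
    x, y, w, h = box
    x0, y0 = max(0, int(x)), max(0, int(y))
    x1, y1 = min(img_w, int(x + w)), min(img_h, int(y + h))
    return slice(y0, y1), slice(x0, x1)  # usable as image[rows, cols]
```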
  • For example, the aforementioned preset privacy information may include: improperly dressed residents in a distant building, payment codes (such as QR codes or barcodes) of customers in a store, or payment passwords of customers in a store; it can be set according to actual needs, and the present disclosure does not limit this.
  • For example, the terminal device may use a blur algorithm to blur the target area in the target image; it can be understood that "blurring" the target area in the target image means applying blur processing to that area.
  • For example, the aforementioned blur algorithm includes at least one of the following: a Gaussian blur filter algorithm, a mean blur filter algorithm, or a Laplacian blur filter algorithm; it can be set according to actual needs, and the present disclosure does not limit this.
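  • As one possible reading of this step (a sketch only; the kernel sizes are illustrative, and the Laplacian-based variant is omitted because the embodiment does not specify its form), the Gaussian and mean blur filters can be applied to just the target area with OpenCV:

```python
import cv2
import numpy as np


def blur_target_area(image: np.ndarray, box, method: str = "gaussian") -> np.ndarray:
    """Blur only the target area; the rest of the frame is left untouched."""
    x, y, w, h = box
    out = image.copy()
    roi = out[y:y + h, x:x + w]          # view into the target area
    if method == "gaussian":
        roi[:] = cv2.GaussianBlur(roi, (31, 31), 0)  # Gaussian blur filter
    elif method == "mean":
        roi[:] = cv2.blur(roi, (31, 31))             # mean (box) blur filter
    else:
        raise ValueError(f"unsupported blur method: {method}")
    return out
```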
  • For example, when the user sets the first zoom factor of the camera of the terminal device to 5x, which is greater than the predetermined threshold of 3x, the terminal device acquires the image of a distant building in the camera's preview interface (that is, the aforementioned target image, shown as 31 in FIG. 3). When the terminal device recognizes that image 31 shows an improperly dressed resident (that is, the aforementioned preset privacy information), as shown in FIG. 4, the terminal device uses the Gaussian blur filter algorithm to blur the area where that resident appears in the image of the distant building collected by the camera (41 in FIG. 4); that area is the aforementioned target area, shown as 42 in FIG. 4. It should be noted that the oblique lines in area 42 of FIG. 4 are used to indicate blurring.
  • the terminal device can exit the remote shooting mode after recognizing that the above-mentioned target image contains preset privacy information.
  • For example, the terminal device may adjust the first zoom factor of the aforementioned camera, where the adjusted zoom factor is less than the aforementioned predetermined threshold; the terminal device can then be considered to have exited the long-distance shooting mode, and no further processing is performed on the target image.
  • the terminal device may adjust the degree of blurring of the target area in the target image according to the aforementioned first zoom factor.
  • step 202 specifically includes the following steps 202a and 202b:
  • Step 202a The terminal device determines a target blur factor corresponding to the first zoom factor of the camera.
  • Step 202b The terminal device blurs the target area in the target image according to the target blur factor.
  • the blur factor of the image of the target area in the target image is proportional to the first zoom factor of the camera.
  • For example, assuming the terminal device uses the Gaussian blur filter algorithm to blur the target area in the target image, the blur factor can be adjusted through the size of the Gaussian blur kernel. The blur factor being proportional to the first zoom factor of the camera can be understood as the size of the Gaussian blur kernel being proportional to the first zoom factor: the larger the first zoom factor, the larger the corresponding Gaussian blur kernel and the greater the degree of blurring of the target area.
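  • A hedged sketch of steps 202a/202b under this proportionality (the scale constant and the rounding to an odd kernel size are assumptions; OpenCV requires odd Gaussian kernel dimensions):

```python
import cv2
import numpy as np


def target_blur_factor(zoom_factor: float, scale: int = 5) -> int:
    """Step 202a sketch: derive a blur factor (Gaussian kernel size) proportional
    to the first zoom factor, forced to an odd value as GaussianBlur requires."""
    k = max(3, int(scale * zoom_factor))
    return k if k % 2 == 1 else k + 1


def blur_by_zoom(image: np.ndarray, box, zoom_factor: float) -> np.ndarray:
    """Step 202b sketch: blur the target area with the zoom-dependent kernel,
    so a larger first zoom factor yields a stronger blur."""
    k = target_blur_factor(zoom_factor)
    x, y, w, h = box
    out = image.copy()
    out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (k, k), 0)
    return out
```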
  • In the image processing method provided by the embodiments of the present disclosure, when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold, the terminal device acquires the target image in the preview interface of the camera and recognizes the target image; when it recognizes that the target image contains preset private information, the terminal device blurs the area where the preset private information is located in the target image, thereby avoiding unintentional or malicious disclosure of other people's private information during long-distance shooting.
  • the terminal device may also remind or warn the user that the target image contains preset privacy information in other ways.
  • the method further includes the following step 203:
  • Step 203 The terminal device sends out a warning message.
  • the terminal device can directly remind the user that the target image contains preset private information by means of vibration and text.
  • For example, if the terminal device recognizes that the target image contains preset privacy information, there is no fixed order between the terminal device blurring the target area in the target image and the terminal device issuing the warning message. The terminal device may issue the warning message before blurring the target area in the target image, may issue it while blurring the target area, or may issue it after blurring the target area; this is not limited in the embodiments of the present disclosure.
  • the terminal device after the terminal device sends out a vibration warning signal to remind the user that the target image contains preset privacy information, the user can manually switch the shooting scene. At this time, the terminal device can reacquire the image in the preview interface of the camera. The image is the target image.
  • the terminal device when the terminal device blurs the above-mentioned target area, the terminal device can exit the remote shooting mode, which can be achieved in at least two possible implementation manners.
  • the method further includes the following step 204:
  • Step 204 The terminal device adjusts the first zoom factor of the aforementioned camera to the second zoom factor.
  • the adjusted second zoom factor is less than the aforementioned predetermined threshold.
  • In a first possible implementation, the terminal device can adjust the zoom on its own. After blurring the aforementioned target area, the terminal device directly adjusts the zoom to a predetermined second zoom factor (less than the aforementioned predetermined threshold); that is, the terminal device exits remote shooting and no longer performs blurring processing on the target image.
  • In a second possible implementation, the user can adjust the zoom manually. After the terminal device blurs the aforementioned target area, the user can manually lower the first zoom factor of the camera of the terminal device. When the terminal device then detects that the second zoom factor of the camera is less than the predetermined threshold, the terminal device can be considered to have exited the long-distance shooting mode; at this time the target image is no longer blurred, that is, the terminal device returns to normal shooting.
  • It should be noted that if the zoom factor is directly proportional to the degree of blurring, then while the user manually lowers the first zoom factor of the camera of the terminal device, the degree to which the terminal device blurs the target area can decrease as the zoom factor decreases, until the terminal device no longer blurs the target image.
  • Optionally, the target image is recognized; if it is recognized that the target image contains preset privacy information, the zoom factor of the camera can be directly lowered, that is, the first zoom factor is lowered to a second zoom factor that is less than the predetermined threshold, in which case the target area where the preset privacy information is located may not be blurred.
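  • Both exit paths can be summarized in one guard, sketched below under the same assumptions as the earlier snippets (the 3x threshold and the zoom-proportional kernel are examples, not requirements): once the zoom factor falls below the threshold the frame is returned unprocessed, and while it is lowered the blur strength shrinks with it.

```python
import cv2

ZOOM_THRESHOLD = 3.0  # example predetermined threshold from the embodiment


def process_preview_frame(image, box, zoom_factor: float):
    """Sketch of the exit behaviour: once the (second) zoom factor drops below
    the threshold, remote shooting mode is considered exited and the frame is
    returned without blurring; above it, blur strength follows the zoom factor."""
    if zoom_factor < ZOOM_THRESHOLD:
        return image                      # normal shooting: no blurring
    k = max(3, int(5 * zoom_factor)) | 1  # zoom-proportional odd kernel size
    x, y, w, h = box
    out = image.copy()
    out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (k, k), 0)
    return out
```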
  • the terminal device after the terminal device obtains the target image currently collected by the camera, it needs to identify whether the target image contains preset privacy information.
  • step 202 specifically includes the following step A1:
  • Step A1 The terminal device uses the image detection model to recognize the above-mentioned target image, and obtains the recognition result.
  • the aforementioned recognition result is used to indicate whether the aforementioned target image contains preset privacy information.
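  • The embodiment leaves the image detection model unspecified; the sketch below assumes a generic two-class PyTorch classifier whose positive class means "contains preset privacy information", and a 0.5 decision threshold, both of which are illustrative assumptions:

```python
import torch
import torch.nn.functional as F


def recognize(model: torch.nn.Module, image_tensor: torch.Tensor,
              threshold: float = 0.5) -> bool:
    """Step A1 sketch: run the image detection model on the target image and
    return the recognition result (True means preset privacy information is
    present). `image_tensor` is a normalized (1, C, H, W) tensor."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor)                 # shape (1, 2)
        privacy_prob = F.softmax(logits, dim=1)[0, 1].item()
    return privacy_prob >= threshold
```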
  • the method further includes the following step A2:
  • Step A2 The terminal device trains the above-mentioned image detection model according to the training samples in the training sample library.
  • the aforementioned training sample library includes at least one training sample, and each training sample in the aforementioned at least one training sample contains preset privacy information.
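  • A minimal training sketch for step A2, assuming the training sample library is exposed as a standard PyTorch Dataset of (image, label) pairs; the optimizer, learning rate, batch size, and epoch count are illustrative choices, not values from the embodiment:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset


def train_detection_model(model: nn.Module, sample_library: Dataset,
                          epochs: int = 5, lr: float = 1e-4) -> nn.Module:
    """Step A2 sketch: train the image detection model on the training sample
    library (samples labelled 1 contain preset privacy information; label 0
    would mark ordinary images added as negatives)."""
    loader = DataLoader(sample_library, batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```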
  • the aforementioned training sample may be an image collected by a user through a terminal device with an image collection function (for example, a camera, a mobile phone, etc.).
  • the method further includes the following step B1:
  • Step B1 If the terminal device recognizes that the target image contains preset privacy information, the terminal device uses the target image as a training sample and stores it in the training sample library.
  • For example, when the terminal device recognizes that the target image contains preset privacy information, it can store the target image in the sample library, thereby enriching the training samples in the training sample library and improving the accuracy of the recognition results of the image detection model.
  • In one example, when the image detection model recognizes the target image and the recognition result turns out to be incorrect (for example, the target image contains preset privacy information but the recognition result indicates that it does not), the target image can be used as a training sample and stored in the training sample library, and the image detection model can be retrained with the training samples in the supplemented training sample library, so as to improve the accuracy of the recognition results obtained after the image detection model recognizes target images.
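  • Step B1 then amounts to appending the newly recognized (or user-corrected) target image to the library before the model is retrained with the sketch above; the file layout below is an assumption used only for illustration:

```python
from pathlib import Path

import cv2


def store_training_sample(target_image, library_dir: str = "privacy_samples") -> Path:
    """Step B1 sketch: persist a target image recognized as containing preset
    privacy information, enriching the training sample library for retraining."""
    directory = Path(library_dir)
    directory.mkdir(parents=True, exist_ok=True)
    index = len(list(directory.glob("sample_*.png")))
    path = directory / f"sample_{index:06d}.png"
    cv2.imwrite(str(path), target_image)
    return path
```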
  • FIG. 5 is a schematic diagram of a possible structure for implementing a terminal device provided by an embodiment of the present disclosure.
  • the terminal device 600 includes: an acquisition module 601, an identification module 602, and a blurring module 603, where:
  • the acquiring module 601 is configured to acquire the target image in the preview interface of the camera when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold.
  • the recognition module 602 is configured to recognize the above-mentioned target image acquired by the above-mentioned acquisition module.
  • the blurring module 603 is configured to blur the target area in the target image if the recognition module 602 recognizes that the target image contains preset privacy information, and the target area is the area where the preset privacy information is located.
  • the terminal device further includes: an adjustment module 604, wherein: the adjustment module 604 is configured to adjust the first zoom factor of the camera to a second zoom factor, and the second zoom factor is smaller than the aforementioned The predetermined threshold.
  • the recognition module 602 is specifically configured to use an image detection model to recognize the target image acquired by the acquisition module 601 to obtain a recognition result, and the recognition result is used to indicate whether the target image contains preset privacy information.
  • the terminal device further includes: a training module 605 and a storage module 606, wherein: the above-mentioned training module 605 is configured to train the above-mentioned image detection model according to the training samples in the training sample library;
  • the training sample library includes at least one training sample, and each training sample in the at least one training sample contains preset privacy information.
  • the storage module 606 is configured to, if it is recognized that the target image contains preset privacy information, use the target image acquired by the acquisition module 601 as a training sample and store it in the training sample library.
  • Optionally, as shown in FIG. 5, the terminal device further includes a determining module 607, where the determining module 607 is configured to determine a target blur factor corresponding to the first zoom factor, and the blurring module 603 is specifically configured to blur the target area in the target image according to the target blur factor determined by the determining module 607.
  • the terminal device further includes: a sending module 608, wherein: the above-mentioned sending module 608 is configured to send a warning message.
  • the terminal device when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold, the terminal device acquires the target image in the preview interface of the camera, and recognizes the target image. When it is recognized that the target image contains preset private information, the terminal device blurs the area where the preset private information is located in the target image to avoid unintentional or malicious disclosure of other people's private information during remote shooting.
  • the terminal device provided in the embodiment of the present disclosure can implement each process implemented by the terminal device in the foregoing method embodiment, and to avoid repetition, details are not described herein again.
  • FIG. 6 is a schematic diagram of the hardware structure of a terminal device implementing various embodiments of the present disclosure. The terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • the terminal device 100 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
  • the processor 110 is configured to obtain the target image in the preview interface of the camera when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold; it is also used to recognize the target image. If the target image contains preset privacy information, the target area in the target image is blurred, and the target area is the area where the preset privacy information is located.
  • In the terminal device provided by the embodiments of the present disclosure, when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold, the terminal device acquires a target image in the preview interface of the camera and recognizes the target image; when it recognizes that the target image contains preset private information, the terminal device blurs the area where the preset private information is located in the target image, thereby avoiding unintentional or malicious disclosure of other people's private information during remote shooting.
  • It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 101 can be used to receive and send signals in the process of sending and receiving information or during a call; specifically, downlink data from the base station is received and then processed by the processor 110, and uplink data is sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
  • the terminal device 100 provides users with wireless broadband Internet access through the network module 102, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 103 can convert the audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 104 is used to receive audio or video signals.
  • The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the processed image frame can be displayed on the display unit 106.
  • the image frame processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the network module 102.
  • the microphone 1042 can receive sound, and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output in the case of a telephone call mode.
  • the terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light.
  • The proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear.
  • As a type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the user input unit 107 may be used to receive input numeric or character information, and generate key signal input related to user settings and function control of the terminal device 100.
  • the user input unit 107 includes a touch panel 1071 and other input devices 1072.
  • The touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110.
  • the touch panel 1071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072.
  • other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • the touch panel 1071 can be overlaid on the display panel 1061.
  • When the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 6 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the terminal device 100, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the terminal device 100, which is not specifically limited here.
  • the interface unit 108 is an interface for connecting an external device with the terminal device 100.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • The interface unit 108 can be used to receive input from an external device (for example, data information, power, etc.) and transmit the received input to one or more elements in the terminal device 100, or can be used to transmit data between the terminal device 100 and an external device.
  • the memory 109 can be used to store software programs and various data.
  • the memory 109 may mainly include a program storage area and a data storage area.
  • The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), etc.; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.).
  • In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The processor 110 is the control center of the terminal device 100. It uses various interfaces and lines to connect the various parts of the terminal device 100, and, by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, performs the various functions of the terminal device 100 and processes data, thereby monitoring the terminal device 100 as a whole.
  • The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
  • The terminal device 100 may also include a power supply 111 (such as a battery) for supplying power to the various components; optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the terminal device 100 includes some functional modules not shown, which will not be repeated here.
  • Optionally, an embodiment of the present disclosure further provides a terminal device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor. When executed by the processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
  • The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the above image processing method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM for short), random access memory (Random Access Memory, RAM for short), magnetic disk, or optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present disclosure provide an image processing method and a terminal device. The method includes: when a first zoom factor of a camera of the terminal device is greater than or equal to a predetermined threshold, acquiring a target image in a preview interface of the camera; recognizing the target image, and if it is recognized that the target image contains preset privacy information, blurring a target area in the target image, the target area being the area where the preset privacy information is located.

Description

Image processing method and terminal device
Cross-reference to related applications
This disclosure claims priority to Chinese patent application No. 201910907111.3, titled "图像处理方法及终端设备" (Image processing method and terminal device) and filed with the China National Intellectual Property Administration on September 24, 2019, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present disclosure relate to the field of communication technologies, and in particular to an image processing method and a terminal device.
Background
With the development of terminal device technology, users use terminal devices more and more frequently, and their requirements for the shooting function of terminal devices are increasingly high.
In the related art, the camera in a terminal device uses optical zoom and digital zoom to increase its zoom factor, which enlarges distant scenes during shooting and thus allows high-quality long-range images to be captured.
However, because a user can easily capture high-quality long-range images with a terminal device, the user may accidentally photograph other people's private matters, and some lawbreakers may deliberately take candid photographs of other people's privacy, so that other people's private information may be leaked during long-distance shooting.
Summary
Embodiments of the present disclosure provide an image processing method and a terminal device, to solve the problem of privacy information leakage during long-distance shooting.
To solve the above technical problem, this application is implemented as follows:
In a first aspect, an embodiment of the present disclosure provides an image processing method, the method including:
when a first zoom factor of a camera of a terminal device is greater than or equal to a predetermined threshold, acquiring a target image in a preview interface of the camera;
recognizing the target image, and if it is recognized that the target image contains preset privacy information, blurring a target area in the target image, the target area being the area where the preset privacy information is located.
In a second aspect, an embodiment of the present disclosure further provides a terminal device, the terminal device including:
an acquiring module, configured to acquire a target image in a preview interface of a camera of the terminal device when a first zoom factor of the camera is greater than or equal to a predetermined threshold;
a recognition module, configured to recognize the target image acquired by the acquiring module; and
a blurring module, configured to blur a target area in the target image if the recognition module recognizes that the target image contains preset privacy information, the target area being the area where the preset privacy information is located.
In a third aspect, an embodiment of the present disclosure provides a terminal device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiments of the present disclosure, when the first zoom factor of the camera of the terminal device is greater than or equal to the predetermined threshold, the terminal device acquires the target image in the preview interface of the camera and recognizes the target image; when it recognizes that the target image contains preset privacy information, the terminal device blurs the area where the preset privacy information is located in the target image, thereby avoiding unintentional or malicious disclosure of other people's private information during long-distance shooting.
Brief description of the drawings
FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a first schematic diagram of an interface to which an image processing method provided by an embodiment of the present disclosure is applied;
FIG. 4 is a second schematic diagram of an interface to which an image processing method provided by an embodiment of the present disclosure is applied;
FIG. 5 is a first schematic structural diagram of a terminal device provided by an embodiment of the present disclosure;
FIG. 6 is a second schematic structural diagram of a terminal device provided by an embodiment of the present disclosure.
Detailed description
The technical solutions in the embodiments of the present disclosure will be described below clearly and completely with reference to the drawings of the embodiments of the present disclosure. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative work fall within the protection scope of this application.
It should be noted that "/" herein means "or"; for example, A/B can mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
It should be noted that "multiple" herein means two or more than two.
It should be noted that, in the embodiments of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure should not be construed as being preferable to or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.
It should be noted that, to clearly describe the technical solutions of the embodiments of the present disclosure, words such as "first" and "second" are used in the embodiments of the present disclosure to distinguish between identical or similar items having substantially the same function or effect. Those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or the execution order. For example, the first zoom factor and the second zoom factor are used to distinguish different zoom factors, not to describe a specific order of the zoom factors.
The execution subject of the image processing method provided by the embodiments of the present disclosure may be the aforementioned terminal device (including mobile and non-mobile terminal devices), or may be a functional module and/or functional entity in the terminal device that can implement the image processing method; this can be determined according to actual usage requirements and is not limited by the embodiments of the present disclosure. The image processing method provided by the embodiments of the present disclosure is illustrated below by taking a terminal device as an example.
The terminal device in the embodiments of the present disclosure may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present disclosure do not specifically limit this.
The terminal device in the embodiments of the present disclosure may be a terminal device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present disclosure.
Taking the Android operating system as an example, the following describes the software environment to which the image processing method provided by the embodiments of the present disclosure is applied.
FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure. In FIG. 1, the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime library layer, and a kernel layer (which may specifically be the Linux kernel layer).
The application layer includes the various applications (including system applications and third-party applications) in the Android operating system.
The application framework layer is the framework of the applications; developers can develop applications based on the application framework layer while complying with its development principles.
The system runtime library layer includes libraries (also called system libraries) and the Android operating system runtime environment. The libraries mainly provide the various resources required by the Android operating system. The Android operating system runtime environment provides the software environment for the Android operating system.
The kernel layer is the operating system layer of the Android operating system and belongs to the lowest level of the Android operating system software. Based on the Linux kernel, the kernel layer provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, in the embodiments of the present disclosure developers can develop, based on the system architecture of the Android operating system shown in FIG. 1, software programs that implement the image processing method provided by the embodiments of the present disclosure, so that the image processing method can run on the Android operating system shown in FIG. 1. That is, the processor or the terminal device can implement the image processing method provided by the embodiments of the present disclosure by running the software program in the Android operating system.
The image processing method of the embodiments of the present disclosure is described below with reference to the flowchart shown in FIG. 2. FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure, including step 201 and step 202:
Step 201: when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold, the terminal device acquires the target image in the preview interface of the camera.
In the embodiments of the present disclosure, the aforementioned zoom may include optical zoom, digital zoom, and dual-camera zoom.
For example, the terminal device can increase the zoom factor of the camera to enlarge distant scenes during shooting, so as to capture high-quality long-range images. In the embodiments of the present disclosure, when the user is shooting, the terminal device can automatically obtain the first zoom factor of the camera; when the first zoom factor of the camera is greater than or equal to a predetermined threshold (for example, a zoom factor of 3x), the camera of the terminal device can be considered to have entered the remote shooting mode, and the terminal device obtains the target image in the preview interface of the camera.
Step 202: the terminal device recognizes the target image, and if it recognizes that the target image contains preset privacy information, the terminal device blurs the target area in the target image.
In the embodiments of the present disclosure, the target area is the area where the preset privacy information is located, and the terminal device may determine the target area by acquiring the coordinates of the preset privacy information in the target image.
For example, the preset privacy information may include: improperly dressed residents in a distant building, payment codes (such as QR codes or barcodes) of customers in a store, or payment passwords of customers in a store; it can be set according to actual needs, and the present disclosure does not limit this.
For example, the terminal device may use a blur algorithm to blur the target area in the target image; it can be understood that "blurring" the target area in the target image means applying blur processing to that area.
For example, the blur algorithm includes at least one of the following: a Gaussian blur filter algorithm, a mean blur filter algorithm, or a Laplacian blur filter algorithm; it can be set according to actual needs, and the present disclosure does not limit this.
For example, when the user sets the first zoom factor of the camera of the terminal device to 5x, which is greater than the predetermined threshold of 3x, the terminal device acquires the image of a distant building in the camera's preview interface (that is, the target image, 31 in FIG. 3). When the terminal device recognizes that image 31 shows an improperly dressed resident (that is, the preset privacy information), as shown in FIG. 4, the terminal device uses the Gaussian blur filter algorithm to blur the area where the resident appears in the image of the distant building collected by the camera (41 in FIG. 4); that area is the target area, 42 in FIG. 4. It should be noted that the oblique lines in area 42 of FIG. 4 are used to indicate blurring.
In addition, after recognizing that the target image contains preset privacy information, the terminal device may exit the remote shooting mode. For example, the terminal device may adjust the first zoom factor of the camera, where the adjusted zoom factor is less than the predetermined threshold; the terminal device can then be considered to have exited the remote shooting mode, and no further processing is performed on the target image.
Optionally, in the embodiments of the present disclosure, the terminal device may adjust the degree of blurring of the target area in the target image according to the first zoom factor.
For example, step 202 specifically includes the following steps 202a and 202b:
Step 202a: the terminal device determines a target blur factor corresponding to the first zoom factor of the camera.
Step 202b: the terminal device blurs the target area in the target image according to the target blur factor.
In one example, the blur factor of the image of the target area in the target image is proportional to the first zoom factor of the camera.
For example, assuming the terminal device uses the Gaussian blur filter algorithm to blur the target area in the target image, the blur factor can be adjusted through the size of the Gaussian blur kernel, and the blur factor being proportional to the first zoom factor of the camera can be understood as the size of the Gaussian blur kernel being proportional to the first zoom factor; that is, the larger the first zoom factor, the larger the corresponding Gaussian blur kernel and the greater the degree of blurring of the target area.
In the image processing method provided by the embodiments of the present disclosure, when the first zoom factor of the camera of the terminal device is greater than or equal to the predetermined threshold, the terminal device acquires the target image in the preview interface of the camera and recognizes the target image; when it recognizes that the target image contains preset privacy information, the terminal device blurs the area where the preset privacy information is located in the target image, thereby avoiding unintentional or malicious disclosure of other people's private information during long-distance shooting.
Optionally, in the embodiments of the present disclosure, when the target image contains preset privacy information, the terminal device may also remind or warn the user in other ways that the target image contains preset privacy information.
For example, if the terminal device recognizes that the target image contains preset privacy information, the method further includes the following step 203:
Step 203: the terminal device issues a warning message.
For example, the warning message may include a vibration warning signal and/or a text warning message. For example, when the terminal device recognizes that the target image contains preset privacy information, the terminal device controls itself to vibrate (the vibration warning signal) and displays "This image contains privacy information" on the screen (the text warning message).
In this way, the terminal device can directly remind the user through vibration and text that the target image contains preset privacy information.
For example, if the terminal device recognizes that the target image contains preset privacy information, there is no fixed order between the terminal device blurring the target area in the target image and the terminal device issuing the warning message. The terminal device may issue the warning message before blurring the target area in the target image, may issue it while blurring the target area, or may issue it after blurring the target area; this is not limited in the embodiments of the present disclosure.
In one example, after the terminal device issues a vibration warning signal to remind the user that the target image contains preset privacy information, the user can manually switch the shooting scene; at this time the terminal device can re-acquire the image in the preview interface of the camera, and that image becomes the target image.
Optionally, in the embodiments of the present disclosure, when the terminal device blurs the target area, the terminal device can exit the remote shooting mode, which can be achieved in at least two possible implementations.
For example, after the above step, the method further includes the following step 204:
Step 204: the terminal device adjusts the first zoom factor of the camera to a second zoom factor.
The adjusted second zoom factor is less than the predetermined threshold.
In a first possible implementation, the terminal device can adjust the zoom on its own. After blurring the target area, the terminal device directly adjusts the zoom to a predetermined second zoom factor (less than the predetermined threshold); that is, the terminal device exits remote shooting and no longer blurs the target image.
In a second possible implementation, the user can adjust the zoom manually. After the terminal device blurs the target area, the user can manually lower the first zoom factor of the camera of the terminal device; when the terminal device then detects that the second zoom factor of the camera is less than the predetermined threshold, the terminal device can be considered to have exited the remote shooting mode, the target image is no longer blurred, and the terminal device returns to normal shooting.
It should be noted that if the zoom factor is proportional to the degree of blurring, then while the user manually lowers the first zoom factor of the camera of the terminal device, the degree to which the terminal device blurs the target area can decrease as the zoom factor decreases, until the terminal device no longer blurs the target image.
Optionally, the target image is recognized, and if it is recognized that the target image contains preset privacy information, the zoom factor of the camera can be directly lowered, that is, the first zoom factor is lowered to a second zoom factor less than the predetermined threshold, in which case the target area where the preset privacy information is located may not be blurred.
Optionally, in the embodiments of the present disclosure, after the terminal device obtains the target image currently collected by the camera, it needs to identify whether the target image contains preset privacy information.
For example, step 202 specifically includes the following step A1:
Step A1: the terminal device uses an image detection model to recognize the target image and obtains a recognition result.
In the embodiments of the present disclosure, the recognition result is used to indicate whether the target image contains preset privacy information.
Optionally, in the embodiments of the present disclosure, before step A1, the method further includes the following step A2:
Step A2: the terminal device trains the image detection model according to training samples in a training sample library.
In the embodiments of the present disclosure, the training sample library includes at least one training sample, and each of the at least one training sample contains preset privacy information.
For example, the training samples may be images collected by the user with a terminal device having an image collection function (for example, a camera or a mobile phone).
For example, after the terminal device recognizes the target image in step 202, the method further includes the following step B1:
Step B1: if the terminal device recognizes that the target image contains preset privacy information, the terminal device stores the target image in the training sample library as a training sample.
For example, when the terminal device recognizes that the target image contains preset privacy information, it can store the target image in the sample library, thereby enriching the training samples in the training sample library and improving the accuracy of the recognition results of the image detection model.
In one example, when the recognition result obtained by the image detection model for the target image is incorrect (for example, the target image contains preset privacy information but the recognition result indicates that it does not), the target image can be stored in the training sample library as a training sample, and the image detection model can be retrained with the training samples in the supplemented training sample library, so as to improve the accuracy of the recognition results obtained after the image detection model recognizes target images.
FIG. 5 is a schematic diagram of a possible structure of a terminal device implementing an embodiment of the present disclosure. As shown in FIG. 5, the terminal device 600 includes an acquiring module 601, a recognition module 602, and a blurring module 603, where:
the acquiring module 601 is configured to acquire the target image in the preview interface of the camera when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold;
the recognition module 602 is configured to recognize the target image acquired by the acquiring module; and
the blurring module 603 is configured to blur the target area in the target image if the recognition module 602 recognizes that the target image contains preset privacy information, the target area being the area where the preset privacy information is located.
Optionally, as shown in FIG. 5, the terminal device further includes an adjustment module 604, where the adjustment module 604 is configured to adjust the first zoom factor of the camera to a second zoom factor, the second zoom factor being less than the predetermined threshold.
Optionally, the recognition module 602 is specifically configured to use an image detection model to recognize the target image acquired by the acquiring module 601 and obtain a recognition result, where the recognition result is used to indicate whether the target image contains preset privacy information.
Optionally, as shown in FIG. 5, the terminal device further includes a training module 605 and a storage module 606, where the training module 605 is configured to train the image detection model according to training samples in a training sample library, the training sample library includes at least one training sample, and each of the at least one training sample contains preset privacy information; the storage module 606 is configured to, if it is recognized that the target image contains preset privacy information, store the target image acquired by the acquiring module 601 in the training sample library as a training sample.
Optionally, as shown in FIG. 5, the terminal device further includes a determining module 607, where the determining module 607 is configured to determine a target blur factor corresponding to the first zoom factor, and the blurring module 603 is specifically configured to blur the target area in the target image according to the target blur factor determined by the determining module 607.
Optionally, as shown in FIG. 5, the terminal device further includes a sending module 608, where the sending module 608 is configured to issue a warning message.
In the terminal device provided by the embodiments of the present disclosure, when the first zoom factor of the camera of the terminal device is greater than or equal to the predetermined threshold, the terminal device acquires the target image in the preview interface of the camera and recognizes the target image; when it recognizes that the target image contains preset privacy information, the terminal device blurs the area where the preset privacy information is located in the target image, thereby avoiding unintentional or malicious disclosure of other people's private information during long-distance shooting.
The terminal device provided by the embodiments of the present disclosure can implement each process implemented by the terminal device in the foregoing method embodiments; to avoid repetition, details are not repeated here.
FIG. 6 is a schematic diagram of the hardware structure of a terminal device implementing various embodiments of the present disclosure. The terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. Those skilled in the art can understand that the structure of the terminal device 100 shown in FIG. 6 does not constitute a limitation on the terminal device; the terminal device 100 may include more or fewer components than shown, combine certain components, or use a different arrangement of components. In the embodiments of the present disclosure, the terminal device 100 includes but is not limited to a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to acquire the target image in the preview interface of the camera when the first zoom factor of the camera of the terminal device is greater than or equal to a predetermined threshold, and to recognize the target image; if it is recognized that the target image contains preset privacy information, the target area in the target image is blurred, the target area being the area where the preset privacy information is located.
In the terminal device provided by the embodiments of the present disclosure, when the first zoom factor of the camera of the terminal device is greater than or equal to the predetermined threshold, the terminal device acquires the target image in the preview interface of the camera and recognizes the target image; when it recognizes that the target image contains preset privacy information, the terminal device blurs the area where the preset privacy information is located in the target image, thereby avoiding unintentional or malicious disclosure of other people's private information during long-distance shooting.
It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 101 can be used to receive and send signals in the process of sending and receiving information or during a call; specifically, downlink data from the base station is received and then processed by the processor 110, and uplink data is sent to the base station. Generally, the radio frequency unit 101 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
The terminal device 100 provides users with wireless broadband Internet access through the network module 102, for example helping users to send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the terminal device 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames can be displayed on the display unit 106. The image frames processed by the graphics processor 1041 can be stored in the memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data; in telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As a type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the terminal device 100. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 can be implemented using resistive, capacitive, infrared, surface acoustic wave, and other types. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; details are not repeated here.
Optionally, the touch panel 1071 can be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 6 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the terminal device 100, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the terminal device 100, which is not specifically limited here.
The interface unit 108 is an interface for connecting an external device to the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 can be used to receive input from an external device (for example, data information, power, etc.) and transmit the received input to one or more elements in the terminal device 100, or can be used to transmit data between the terminal device 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.). In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device 100. It uses various interfaces and lines to connect the various parts of the terminal device 100, and, by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, performs the various functions of the terminal device 100 and processes data, thereby monitoring the terminal device 100 as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The terminal device 100 may also include a power supply 111 (such as a battery) for supplying power to the various components; optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which will not be repeated here.
Optionally, an embodiment of the present disclosure further provides a terminal device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor 110. When executed by the processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, each process of the above image processing method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, herein, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the drawings, but this application is not limited to the specific implementations described above; the specific implementations described above are merely illustrative rather than restrictive. Under the inspiration of this application, those of ordinary skill in the art can devise many other forms without departing from the spirit of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (14)

  1. An image processing method, the method comprising:
    when a first zoom factor of a camera of a terminal device is greater than or equal to a predetermined threshold, acquiring a target image in a preview interface of the camera;
    recognizing the target image, and if it is recognized that the target image contains preset privacy information, blurring a target area in the target image, wherein the target area is an area where the preset privacy information is located.
  2. The method according to claim 1, wherein after the blurring a target area in the target image, the method further comprises:
    adjusting the first zoom factor of the camera to a second zoom factor, wherein the second zoom factor is less than the predetermined threshold.
  3. The method according to claim 1, wherein the recognizing the target image comprises:
    recognizing the target image by using an image detection model to obtain a recognition result, wherein the recognition result is used to indicate whether the target image contains the preset privacy information.
  4. The method according to claim 3, wherein before the recognizing the target image by using an image detection model to obtain a recognition result, the method further comprises:
    training the image detection model according to training samples in a training sample library, wherein the training sample library comprises at least one training sample, and each of the at least one training sample contains the preset privacy information;
    after the recognizing the target image, the method further comprises:
    if it is recognized that the target image contains the preset privacy information, storing the target image in the training sample library as a training sample.
  5. The method according to claim 1, wherein the blurring a target area in the target image comprises:
    determining a target blur factor corresponding to the first zoom factor;
    blurring the target area in the target image according to the target blur factor.
  6. The method according to claim 1, wherein after it is recognized that the target image contains preset privacy information, the method further comprises:
    issuing a warning message.
  7. A terminal device, the terminal device comprising:
    an acquiring module, configured to acquire a target image in a preview interface of a camera of the terminal device when a first zoom factor of the camera is greater than or equal to a predetermined threshold;
    a recognition module, configured to recognize the target image acquired by the acquiring module; and
    a blurring module, configured to blur a target area in the target image if the recognition module recognizes that the target image contains preset privacy information, wherein the target area is an area where the preset privacy information is located.
  8. The terminal device according to claim 7, further comprising an adjustment module, wherein:
    the adjustment module is configured to adjust the first zoom factor of the camera to a second zoom factor, the second zoom factor being less than the predetermined threshold.
  9. The terminal device according to claim 7, wherein:
    the recognition module is specifically configured to recognize, by using an image detection model, the target image acquired by the acquiring module to obtain a recognition result, wherein the recognition result is used to indicate whether the target image contains the preset privacy information.
  10. The terminal device according to claim 9, further comprising a training module and a storage module, wherein:
    the training module is configured to train the image detection model according to training samples in a training sample library, wherein the training sample library comprises at least one training sample, and each of the at least one training sample contains the preset privacy information;
    the storage module is configured to: if it is recognized that the target image contains the preset privacy information, store the target image recognized by the recognition module in the training sample library as a training sample.
  11. The terminal device according to claim 7, further comprising a determining module, wherein:
    the determining module is configured to determine a target blur factor corresponding to the first zoom factor;
    the blurring module is specifically configured to blur the target area in the target image according to the target blur factor determined by the determining module.
  12. The terminal device according to claim 7, further comprising a sending module, wherein:
    the sending module is configured to issue a warning message.
  13. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and capable of running on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 6.
  14. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 6.
PCT/CN2020/106777 2019-09-24 2020-08-04 图像处理方法及终端设备 WO2021057267A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910907111.3A CN110719402B (zh) 2019-09-24 2019-09-24 图像处理方法及终端设备
CN201910907111.3 2019-09-24

Publications (1)

Publication Number Publication Date
WO2021057267A1 true WO2021057267A1 (zh) 2021-04-01

Family

ID=69210032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/106777 WO2021057267A1 (zh) 2019-09-24 2020-08-04 图像处理方法及终端设备

Country Status (2)

Country Link
CN (1) CN110719402B (zh)
WO (1) WO2021057267A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463264A (zh) * 2021-12-28 2022-05-10 浙江大华技术股份有限公司 高空抛物监测中隐私保护方法及装置
CN114692202A (zh) * 2022-03-31 2022-07-01 马上消费金融股份有限公司 图像处理方法、装置、电子设备及存储介质

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110719402B (zh) * 2019-09-24 2021-07-06 维沃移动通信(杭州)有限公司 图像处理方法及终端设备
CN111339831B (zh) * 2020-01-23 2023-08-18 深圳市大拿科技有限公司 一种照明灯控制方法和***
CN113742183A (zh) * 2020-05-29 2021-12-03 青岛海信移动通信技术股份有限公司 一种录屏方法、终端及存储介质
CN112040145B (zh) * 2020-08-28 2023-04-07 维沃移动通信有限公司 图像处理方法、装置及电子设备
CN112257123A (zh) * 2020-09-07 2021-01-22 西安万像电子科技有限公司 图像的处理方法及***
CN112057874A (zh) * 2020-09-10 2020-12-11 重庆五诶科技有限公司 具备隐私保护的游戏辅助***及方法
CN112182648A (zh) * 2020-09-18 2021-01-05 支付宝(杭州)信息技术有限公司 一种隐私图像、人脸隐私的处理方法、装置及设备
CN114815638A (zh) * 2021-08-25 2022-07-29 北京京东方技术开发有限公司 设备配置方法、电子装置和计算机可读存储介质
CN113987602A (zh) * 2021-10-28 2022-01-28 维沃移动通信有限公司 图像数据的处理方法和电子设备
CN117135307A (zh) * 2022-08-08 2023-11-28 惠州Tcl移动通信有限公司 摄像头监控方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653041A (zh) * 2016-01-29 2016-06-08 北京小米移动软件有限公司 显示状态调整方法及装置
CN107122679A (zh) * 2017-05-16 2017-09-01 北京小米移动软件有限公司 图像处理方法及装置
US9848167B1 (en) * 2016-06-21 2017-12-19 Amazon Technologies, Inc. Low bandwidth video
CN107958161A (zh) * 2017-11-30 2018-04-24 维沃移动通信有限公司 一种多任务显示方法及移动终端
CN108366196A (zh) * 2018-01-25 2018-08-03 西安中科创达软件有限公司 一种保护图片隐私的方法
CN110719402A (zh) * 2019-09-24 2020-01-21 维沃移动通信(杭州)有限公司 图像处理方法及终端设备

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100771137B1 (ko) * 2005-02-21 2007-10-30 삼성전자주식회사 프라이버시영역을 마스크 처리하는 감시시스템 및 마스크영역 설정방법
JP5088161B2 (ja) * 2008-02-15 2012-12-05 ソニー株式会社 画像処理装置、カメラ装置、通信システム、画像処理方法、およびプログラム
WO2012004907A1 (ja) * 2010-07-06 2012-01-12 パナソニック株式会社 画像配信装置
EP2771865A4 (en) * 2011-10-25 2015-07-08 Sony Corp APPARATUS, METHOD AND PROGRAM PRODUCT OF IMAGE PROCESSING COMPUTER
JP6687488B2 (ja) * 2015-12-24 2020-04-22 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 無人飛行体及びその制御方法
JP6910772B2 (ja) * 2016-09-08 2021-07-28 キヤノン株式会社 撮像装置、撮像装置の制御方法およびプログラム
CN106803930A (zh) * 2017-02-10 2017-06-06 上海斐讯数据通信技术有限公司 一种基于路由器的智能视频监控方法及智能路由器
CN109413323A (zh) * 2017-08-15 2019-03-01 联发科技(新加坡)私人有限公司 图像处理方法、拍照设备、及存储介质
CN108848334A (zh) * 2018-07-11 2018-11-20 广东小天才科技有限公司 一种视频处理的方法、装置、终端和存储介质
CN109040594B (zh) * 2018-08-24 2020-12-18 创新先进技术有限公司 拍照方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653041A (zh) * 2016-01-29 2016-06-08 北京小米移动软件有限公司 显示状态调整方法及装置
US9848167B1 (en) * 2016-06-21 2017-12-19 Amazon Technologies, Inc. Low bandwidth video
CN107122679A (zh) * 2017-05-16 2017-09-01 北京小米移动软件有限公司 图像处理方法及装置
CN107958161A (zh) * 2017-11-30 2018-04-24 维沃移动通信有限公司 一种多任务显示方法及移动终端
CN108366196A (zh) * 2018-01-25 2018-08-03 西安中科创达软件有限公司 一种保护图片隐私的方法
CN110719402A (zh) * 2019-09-24 2020-01-21 维沃移动通信(杭州)有限公司 图像处理方法及终端设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463264A (zh) * 2021-12-28 2022-05-10 浙江大华技术股份有限公司 高空抛物监测中隐私保护方法及装置
CN114692202A (zh) * 2022-03-31 2022-07-01 马上消费金融股份有限公司 图像处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN110719402A (zh) 2020-01-21
CN110719402B (zh) 2021-07-06

Similar Documents

Publication Publication Date Title
WO2021057267A1 (zh) 图像处理方法及终端设备
WO2021098678A1 (zh) 投屏控制方法及电子设备
CN108513070B (zh) 一种图像处理方法、移动终端及计算机可读存储介质
WO2021104197A1 (zh) 对象跟踪方法及电子设备
WO2021104195A1 (zh) 图像显示方法及电子设备
WO2021098603A1 (zh) 预览画面显示方法及电子设备
WO2020156120A1 (zh) 通知消息显示方法及移动终端
CN111562896B (zh) 投屏方法及电子设备
CN108307106B (zh) 一种图像处理方法、装置及移动终端
CN109523253B (zh) 一种支付方法和装置
CN110602389B (zh) 一种显示方法及电子设备
WO2021057712A1 (zh) 接近检测方法及终端设备
CN110138967B (zh) 一种终端的操作控制方法及终端
WO2021104266A1 (zh) 对象显示方法及电子设备
CN110830713A (zh) 一种变焦方法及电子设备
WO2021082772A1 (zh) 截屏方法及电子设备
WO2020156119A1 (zh) 应用程序界面调整方法及移动终端
CN109992192B (zh) 一种界面显示方法及终端设备
CN109104573B (zh) 一种确定对焦点的方法及终端设备
EP3816768A1 (en) Object recognition method and mobile terminal
WO2021104265A1 (zh) 电子设备及对焦方法
CN108243489B (zh) 一种拍照控制方法及移动终端
WO2021036504A1 (zh) 图片删除方法及终端设备
CN111026263B (zh) 一种音频播放方法及电子设备
CN111131930B (zh) 设备资源控制方法、第一电子设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20870238

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20870238

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/10/2022)
