CN113987602A - Image data processing method and electronic equipment - Google Patents

Info

Publication number
CN113987602A
Authority
CN
China
Prior art keywords
image
target
user
processing
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111265900.5A
Other languages
Chinese (zh)
Inventor
向永航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111265900.5A priority Critical patent/CN113987602A/en
Publication of CN113987602A publication Critical patent/CN113987602A/en
Priority to PCT/CN2022/127267 priority patent/WO2023072038A1/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70: Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/82: Protecting input, output or interconnection devices
    • G06F 21/84: Protecting output devices, e.g. displays or monitors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218: Protecting access to a system of files or objects, e.g. a local or distributed file system or database
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image data processing method and an electronic device, belonging to the field of artificial intelligence. The method comprises the following steps: acquiring a first image captured by a camera, wherein the first image comprises a face image area of a target user; and, when the matching degree between the first image and the display content of the interface is less than or equal to a target value, performing privacy processing on the first image or outputting prompt information.

Description

Image data processing method and electronic equipment
Technical Field
The application belongs to the field of artificial intelligence and particularly relates to an image data processing method and an electronic device.
Background
With the rapid development of artificial intelligence, some applications need, during business processing, to invoke a camera application so that the camera captures face image data of a target user, and then carry out the corresponding business processing based on that data, for example face recognition or a video conference.
However, in some applications the user may, during business processing, be in an environment that exposes private content without being aware of it, so that the user's private data is leaked.
Disclosure of Invention
Embodiments of the application aim to provide an image data processing method and an electronic device that can solve the prior-art problem of user private data being leaked.
In a first aspect, an embodiment of the present application provides a method for processing image data, where the method includes:
acquiring a first image acquired by a camera, wherein the first image comprises a face image area of a target user;
and when the matching degree of the first image and the display content of the interface is smaller than or equal to a target value, carrying out privacy processing on the first image or outputting prompt information.
In a second aspect, an embodiment of the present application provides an electronic device, including:
the first image acquisition module is used for acquiring a first image acquired by a camera, wherein the first image comprises a face image area of a target user;
and the first image processing module is used for carrying out privacy processing on the first image or outputting prompt information under the condition that the matching degree of the first image and the display content of the interface is smaller than or equal to a target value.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the method of processing image data according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method for processing image data according to the first aspect.
In the embodiments of the application, a first image captured by a camera is acquired, the first image comprising a face image area of a target user; when the matching degree between the first image and the display content of the interface is less than or equal to a target value, privacy processing is performed on the first image, or prompt information is output. In other words, the first image actually captured by the camera is matched against the content actually displayed on the interface of the target application. If the matching degree is less than or equal to the target value, the first image cannot be fully displayed on the interface of the target application, and the part of the first image that is not displayed there may contain the user's private data, so there is a risk that this data is leaked without the user's knowledge. Therefore, before the first image is transmitted to the target application, the part that is not displayed on the interface is subjected to privacy processing, yielding an image that contains no private user data, or prompt information is output. This reduces the risk that private data present in the undisplayed part of the first image is leaked, prevents the leakage of the target user's private data at its source, protects the user's private data, and improves the security of the image data captured when the target application invokes the camera.
Drawings
Fig. 1 is a first flowchart illustrating a method for processing image data according to an embodiment of the present application;
Fig. 2 is a second flowchart of a method for processing image data according to an embodiment of the present application;
Fig. 3 is a schematic diagram illustrating an implementation principle of a first image privacy protection process of a processing method of image data according to an embodiment of the present application;
Fig. 4 is a schematic diagram illustrating an implementation principle of a second image privacy protection process of the image data processing method according to an embodiment of the present application;
Fig. 5 is a third flowchart illustrating a method for processing image data according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a first module composition of an electronic device according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a second module composition of an electronic device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the application are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so designated may be interchanged where appropriate, so that embodiments of the application can be practiced in orders other than those illustrated or described here, and that "first", "second" and the like do not limit the number of objects; for example, the first object may be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The following describes in detail the processing method of image data and the electronic device provided in the embodiments of the present application with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a first flowchart of a method for processing image data according to an embodiment of the present application. The method in Fig. 1 can be executed by an electronic device, and in particular by a processor in the electronic device. As shown in Fig. 1, the method comprises at least the following steps:
S102, acquiring a first image captured by a camera, wherein the first image comprises a face image area of a target user;
Before the first image captured by the camera is acquired, the method further comprises: the target application invokes the camera application, which in turn invokes the camera to capture the first image.
Specifically, when the electronic device detects an access request of the user for a specified service of the target application and the current service processing node needs image data of the user, it invokes the camera application so that the camera captures a first image of the user. For example, when the user uses a payment application and the current service processing node is detected to be face recognition, the camera application is invoked and the camera captures a first image of the user; likewise, when the user uses a video sharing application and the current service processing node is detected to be video creation shooting, the camera application is invoked and the camera captures a first image of the user.
Further, because of differences in how target applications display images, the first image may be only partially displayed on the interface of the target application; that is, the first image captured when the target application invokes the camera is not displayed in full on the interface. Since the part of the first image that is not displayed on the interface may contain the user's private data, transmitting the first image directly to the target application carries the risk that private data is leaked because the user cannot observe the complete first image. The first image therefore needs to be subjected to privacy processing, or prompt information needs to be output. On this basis, the first image captured by the camera is acquired by the processor in the electronic device before the target application obtains it. Besides the face image area of the target user, the first image may contain other image areas, which may include at least one of: face image areas of users other than the target user, non-face image areas of the target user, non-face image areas of other users, and image areas of the environment in which the target user is currently located.
and S104, carrying out privacy processing on the first image or outputting prompt information when the matching degree of the first image and the display content of the interface is smaller than or equal to a target value.
Specifically, if the matching degree of the display content of the first image acquired by the camera and the interface of the target application is smaller than or equal to the target value, determining that the display mode of the target application is image partial area display, namely the first image is partially displayed in the interface of the target application; and if the matching degree of the first image acquired by the camera and the display content of the interface is greater than the target value, determining that the display mode of the target application is the display of all the areas of the image, namely the first image is displayed in all the areas of the interface of the target application.
Specifically, for a situation that a matching degree of a first image acquired by a camera and display content of an interface is smaller than or equal to a target value, that is, an image display mode of a target application is image partial area display, after a processor in the electronic device acquires the first image acquired by the camera, the processor needs to perform privacy processing on the first image, or output prompt information; the above-mentioned way of performing privacy processing on the first image may be to perform image replacement processing on a target image area in other image areas except for the face image area of the target user, that is, replace or fill the target image area with public data, for example, a preset image (which may be a background with a uniform color or another background), and then obtain a second image that does not include the privacy data of the target user, and transmit the second image to the target application; the manner of outputting the prompt information may be to display the prompt information, voice prompt, vibration, and the like to the user in the interface to prompt the target user to adjust the pose, for example, in the form of a floating window, to display the first image (i.e., the content of the picture actually shot by the camera), so as to prompt the target user to adjust the pose based on the first image, i.e., to adjust the pose based on the actually shot picture displayed on the floating window in real time, so as to ensure that the subsequently collected image does not include the privacy data of the user.
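The decision between the two handling paths can be summarised in the following minimal Python sketch; the threshold value and the function name are illustrative assumptions, not taken from the application:

```python
TARGET_VALUE = 0.9  # assumed threshold; the application does not fix a concrete value

def choose_handling(match_degree: float) -> str:
    """Decide how the camera frame is handled before the target application sees it."""
    if match_degree > TARGET_VALUE:
        # Image full area display: the user can see the whole frame, so it can be
        # forwarded to the target application unchanged.
        return "forward_unchanged"
    # Image partial area display: part of the frame is invisible to the user and may
    # contain private data, so it is either sanitised (image replacement) or the user
    # is prompted to adjust the pose via a floating window.
    return "privacy_process_or_prompt"
```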
In the embodiment of the application, the first image actually captured by the camera is matched against the content actually displayed on the interface of the target application. If the matching degree is less than or equal to the target value, the first image cannot be fully displayed on the interface of the target application, and the part that is not displayed there may contain the user's private data, so there is a risk that this data is leaked without the user's knowledge. Before the first image is transmitted to the target application, the part of the first image that is not displayed on the interface is therefore subjected to privacy processing to obtain an image that contains no private user data, or prompt information is output. This reduces the risk that private data in the undisplayed part of the first image is leaked, prevents the leakage of the target user's private data at its source, protects the user's private data, and improves the security of the image data captured when the target application invokes the camera.
Since the second image obtained by performing privacy processing on the first image contains no private user data, it can be transmitted directly to the target application. On this basis, as shown in Fig. 2, after the privacy processing of the first image or the output of prompt information in step S104, the method further comprises:
S106, transmitting a second image obtained from the first image to the target application, which performs the corresponding business processing based on the second image.
Specifically, typical application scenarios of the image data processing method provided by the application are scenarios in which a target application such as face recognition, a video conference or short video publishing needs to invoke the camera application in order to have the camera capture a scene for the corresponding business processing; the target application and the camera application are different applications.
In a specific implementation: for a face recognition scenario, the target application needs to perform face recognition on the target user based on the face image area of the target user captured by the camera; for a video conference scenario, the target application needs to display the portrait data of the target user captured by the camera on the display interface (display screen); for a short video publishing scenario, the target application needs to display the short video data of the target user captured by the camera on the display interface (display screen).
In a specific implementation, take a face recognition scenario as an example. Because the target application only recognises the face data of the target user, only the face image area of the target user in the first image captured by the camera is displayed on the display interface of the target application, i.e. the matching degree between the first image and the display content of the interface is less than or equal to the target value (the first image is partially displayed on the display interface of the target application). The first image captured by the camera, however, contains not only the face image area of the target user but also other image areas, so the target user cannot observe those other areas, which are captured by the camera, transmitted to the target application, and may contain the target user's private data; this can lead to leakage of the target user's private data. To protect that data, before the target application obtains the first image, the processor of the electronic device first acquires the first image of the target user captured by the camera (i.e. the face image area of the target user and the other image areas), performs privacy processing on the other image areas, and thereby obtains a second image; that is, the combination of the face image area of the target user and the other image areas after privacy processing is taken as the second image, which is then transmitted to the target application so that the target application can perform face recognition on the target user based on the face image area in the second image. Alternatively, prompt information is output: the first image (the picture actually captured by the camera) is displayed in the form of a floating window so that the target user adjusts the pose based on the first image, that is, based on the actually captured picture displayed in real time in the floating window, ensuring that subsequently captured images contain no private data of the user.
In the application, before the image captured by the camera is transmitted to the target application, the processor performs privacy protection processing on the image areas other than the face image area of the target user that may contain the target user's private data, so that the image areas finally transmitted to the target application contain no private data. Leakage of the user's private data is thus prevented at its source, and the user's private data is protected.
Privacy processing on the first image can consist of replacing the target image area that contains private data with publishable data. On this basis, performing privacy processing on the first image in step S104 specifically comprises: performing image replacement on a target image area within the image areas of the first image other than the face image area of the target user.
Specifically, when the matching degree between the first image and the display content of the interface is less than or equal to the target value (i.e. the image display mode of the target application is image partial area display), the first image of the target user captured by the camera is acquired first; then image replacement is performed on the target image area, within the image areas other than the face image area of the target user, that contains the target user's private data, replacing the private data with publishable data and thereby obtaining a second image that contains no private data of the target user.
Further, the second image that contains no private data of the target user is transmitted to the target application. On this basis, transmitting the second image obtained from the first image to the target application specifically comprises: generating a second image of the target user from the other image areas after image replacement and the face image area of the target user, and transmitting the second image to the target application.
Besides performing privacy processing on the first image, prompt information can be output so that the target user adjusts the pose and subsequently captured images contain no private data of the user. On this basis, outputting prompt information in step S104 specifically comprises: displaying the first image in the form of a floating window to prompt the target user to adjust the pose based on the first image.
Specifically, while the display screen of the electronic device remains in the original image display mode of the target application, the picture actually captured by the camera can additionally be displayed in full as a thumbnail or in a floating window (that is, the first image is not only partially displayed on the display interface of the target application according to its original image display mode, but is also displayed in full in the thumbnail or floating window).
Further, when the target user has finished adjusting the pose and has determined that the captured image contains no private user data, the image captured by the camera can be transmitted to the target application. On this basis, after outputting the prompt information the method further comprises: acquiring a third image captured by the camera upon receiving a confirmation input indicating that the pose adjustment of the target user has ended, and transmitting the third image to the target application.
Specifically, a confirmation input that the pose adjustment has ended is received, fed back by the target user on the basis of the fully displayed first image observed in real time (that is, a confirmation input that the first image contains no private data of the target user). On the basis of this confirmation input, a third image captured by the camera after the pose adjustment is acquired. Since the third image contains no private data of the target user, i.e. a third image containing no private user information has been obtained on the basis of the pose-adjusted picture, the processor in the electronic device can transmit it directly to the target application.
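The prompt-and-recapture path could be organised roughly as in the following Python sketch; the callable parameters stand in for platform facilities (camera capture, floating-window UI, hand-off to the target application) and are assumptions, not APIs named in the application:

```python
from typing import Callable
import numpy as np

def prompt_and_recapture(capture: Callable[[], np.ndarray],
                         show_floating_window: Callable[[np.ndarray], None],
                         wait_for_confirmation: Callable[[], bool],
                         send_to_target_app: Callable[[np.ndarray], None]) -> None:
    """Show the raw frame in a floating window, wait for the user to confirm that
    the adjusted pose exposes no private content, then forward a fresh capture
    (the 'third image') to the target application without further processing."""
    show_floating_window(capture())        # full frame, not the app's cropped view
    if wait_for_confirmation():            # user signals the pose adjustment is done
        send_to_target_app(capture())      # third image, assumed free of private data
```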
Different privacy processing approaches, or a combination of several of them, can be adopted in different situations to make privacy processing more flexible and further improve the security of the user's private data. In particular, the two approaches of performing privacy processing on the image areas other than the face image area of the target user in the first image and outputting prompt information can be combined.
Specifically, the first image of the target user is displayed in the form of a floating window to prompt the target user to adjust the pose based on it, and a confirmation input that the pose adjustment has ended (i.e. that the first image contains no private data of the target user) is received, fed back by the target user on the basis of the first image observed in real time. To improve the reliability of this confirmation, the processor of the electronic device may first acquire the first image captured by the camera after the target user's pose adjustment and, when the matching degree between the first image and the display content of the interface of the target application is less than or equal to the target value, perform privacy processing on it: privacy risk recognition is carried out on the first image and, if a risk of privacy leakage is recognised, image replacement is performed on the target image area containing private user data, and a second image of the target user containing no private data is determined from the first image after image replacement. Alternatively, image replacement may be performed directly on the target image area in the first image, and the second image containing no private data determined from the replaced first image.
In the embodiments of the application, outputting prompt information provides privacy protection approaches different from image replacement of the target image area, such as adjusting the pose of the target user, which improves the flexibility of privacy protection; combining the two approaches can further improve the security of the user's private data.
Further, regarding the approach of outputting prompt information, prompting the target user to adjust the pose and acquiring a third image captured by the camera upon receiving a confirmation input that the pose adjustment has ended specifically comprises:
displaying the face image area of the target user and the other image areas in the form of a floating window on the display interface of the target application;
Specifically, a floating window pops up in a designated area of the display interface of the target application. The floating window may be semi-transparent or fully transparent and displays the complete first image of the target user (i.e. the face image area of the target user and the other image areas);
prompting the target user to adjust the pose based on the first image displayed in the floating window;
The pose adjustment may include adjusting the pose of the electronic device (i.e. of the camera) and/or adjusting the pose of the target user. The criterion for the adjustment is that, as observed by the target user in real time, the first image displayed in the floating window contains no private data of the target user;
acquiring a third image captured by the camera upon receiving a confirmation input that the pose adjustment of the target user has ended;
The third image acquired after the confirmation input (i.e. the image captured after the target user's pose adjustment) contains no private data of the target user; in other words, the aim of the pose adjustment is that the image areas other than the face image area of the target user contain no private user data.
Specifically, as shown in Fig. 3, a floating window pops up on the display interface of the target application. The floating window can display the complete first image to the target user (i.e. the face image area of the target user and the other image areas), giving the display effect shown in (b) of Fig. 3, and the target user is prompted to adjust the pose based on the first image displayed in the floating window. Upon receiving a confirmation input that the pose adjustment has ended, i.e. a confirmation, fed back by the target user on the basis of the image observed on the interface of the target application, that the first image contains no private data of the target user, the image after pose adjustment is taken as the third image (i.e. a third image captured by the camera is acquired) and transmitted to the target application, which performs the corresponding business processing based on it.
Further, regarding the approach of performing privacy processing on the first image, performing image replacement on the target image area within the image areas of the first image other than the face image area of the target user specifically comprises:
Step 1: determining a target image area within the image areas of the first image other than the face image area of the target user, where the target image area comprises either all of those other image areas or only the parts of them that are not displayed on the interface;
Specifically, when the target image area comprises all of the other image areas, image recognition is performed on the first image, the region in which the face image area of the target user lies is determined, and the image area outside that region is taken as the target image area. When the target image area comprises only the parts of the other image areas that are not displayed on the interface, the undisplayed area is determined as the target image area based on a pre-stored positional relationship between the displayed and undisplayed areas of the display interface of the target application. Specifically, a default image (an image transmitted to the target application by the processor that may be made public) may first be transmitted to the target application, the partial display region that the default image occupies on the display interface of the target application is then obtained, and the proportional relationship between that partial display region and the complete image region is calculated.
Step 2: performing image replacement on the target image area based on a preset image;
Specifically, the image content in the target image area of the first image is replaced with a preset image, i.e. the target image area is updated by image replacement, where the preset image may be a publishable background image such as a mosaic background or a background of uniform color.
Further, the combination of the face image area of the target user in the first image and the image areas other than the face image area after image replacement is determined as the second image of the target user.
Specifically, when the target image area is all of the other image areas, image replacement is performed, based on the preset image, on the image areas of the first image other than the face image area of the target user, yielding an updated first image (the other image areas are replaced by the preset image). As shown in Fig. 4, when the target image area is the part of the other image areas that is not displayed on the interface, the first image (comprising the displayed and undisplayed image areas) is shown on the display interface of the target application with the display effect shown in (a) of Fig. 4, while the complete first image is shown in (b) of Fig. 4. The undisplayed parts of the image areas other than the face image area of the target user are replaced with the preset image, and the second image is the combination of the face image area of the target user (i.e. the face image area within the displayed region), the displayed parts of the other image areas, and the undisplayed parts of the other image areas after image replacement (i.e. the preset image), giving the display effect shown in (c) of Fig. 4.
In the embodiments of the application, when image replacement is performed on the target image area of the first image as part of privacy processing, the extent of the target image area can be set flexibly, which further improves the flexibility of privacy protection.
In a specific implementation, the image privacy protection mechanism can be triggered directly after the first image captured by the camera is acquired, i.e. step S104 is triggered directly to perform privacy processing on the first image or output prompt information. Further, since the risk of private data being leaked is comparatively high only when the image display mode of the target application is image partial area display, the pertinence of image privacy protection can be improved, and the business-processing response efficiency of the target application increased while the security of the user's private data is still guaranteed, by first confirming that the matching degree between the first image and the display content of the interface of the target application is less than or equal to the target value, i.e. by triggering the image privacy protection mechanism only once the image display mode of the target application has been determined to be image partial area display. Specifically, as shown in Fig. 5, before step S104 performs privacy processing on the first image or outputs prompt information when the matching degree between the first image and the display content of the interface is less than or equal to the target value, the method comprises:
s108, acquiring display content of the interface;
s110, comparing the first image acquired by the camera with the display content of the interface, and determining the matching degree of the first image and the display content.
Specifically, after the first image captured by the camera is acquired, it may contain the user's private data and therefore cannot be transmitted directly to the target application. Step S108 can thus be executed at the same time as the first image is acquired, ensuring that the display content of the interface and the first image were captured in the same shooting environment and share part of the same image content, so that the two can be compared and the matching degree between them determined, which in turn determines the image display mode of the target application. The display content of the interface in step S108 may be obtained either before or after the first image captured by the camera is obtained in S102; this also addresses the problem that the target application might automatically invoke the camera to capture the first image of the target user and thereby leak the user's private data.
Specifically, the first image captured by the camera is compared with the display content of the interface and the matching degree between them is determined. If the matching degree is less than or equal to the target value, it is determined that the first image of the target user is displayed on the display interface of the target application as a partial image area; if the matching degree is greater than the target value, it is determined that the first image is displayed in full on the display interface of the target application.
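One way the comparison could be carried out, assuming the displayed content is an unscaled crop of the camera frame, is sketched below using OpenCV template matching; the score threshold and the crop assumption are illustrative, not taken from the application:

```python
import cv2
import numpy as np

def display_match_degree(first_image: np.ndarray, interface_content: np.ndarray) -> float:
    """Locate the interface's displayed content inside the raw camera frame and
    return the fraction of the frame area that it covers (1.0 = whole frame shown)."""
    frame = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    shown = cv2.cvtColor(interface_content, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(frame, shown, cv2.TM_CCOEFF_NORMED)
    _, best, _, _ = cv2.minMaxLoc(scores)
    if best < 0.8:                                   # assumed threshold: content not found
        return 0.0
    return (shown.shape[0] * shown.shape[1]) / (frame.shape[0] * frame.shape[1])
```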
Further, the matching degree between the first image captured by the camera and the display content of the target application's interface can be determined not only by obtaining the display content at the same time as the first image and comparing the two, but also on the basis of a default image. Specifically, the matching degree may be determined on the basis of a default image each time the target application starts to capture a first image of the target user with the camera, thereby establishing the image display mode of the target application, i.e. whether the first image is displayed as a partial image area or in full. Alternatively, the matching degree may be determined on the basis of the default image whenever the target application satisfies a preset condition, for example when it is installed for the first time or when a version update occurs; the image display mode of the target application is then determined and stored (e.g. image partial area display or image full area display), so that later it can be established directly from the pre-stored display mode label of the target application whether the first image is displayed as a partial image area or in full.
Specifically, if the display mode of the target application is determined to be image partial area display, privacy processing is performed on the first image, or prompt information is output, in step S104.
If the display mode of the target application is determined to be image full area display, the first image of the target user is transmitted directly to the target application. To further ensure the security of private data, after the display mode is determined to be image full area display the target user may be prompted to confirm that the first image displayed on the interface of the target application poses no risk of privacy leakage, and the first image is transmitted to the target application only after the target user's confirmation is received, so that the target application performs the corresponding business processing based on it.
Specifically, the process of determining the display content matching degree of the first image acquired by the camera and the target application interface based on the default image and further determining the image display mode of the target application specifically includes:
displaying a default image on a display interface of a target application, and performing screenshot on the display interface to obtain a fourth image;
the default image is different from the first image of the target user acquired by the camera, is a default image (public image) directly transmitted to the target application by the processor, and does not contain privacy data of the target user.
comparing the default image with the fourth image and determining the matching degree between them (used as a proxy for the matching degree between the first image and the display content of the interface), i.e. judging whether the area occupied by the image content common to the default image and the fourth image is less than or equal to the target value;
The area in question is the area that the content common to the default image and the fourth image occupies on the fourth image (i.e. on the display interface). Specifically, if this area is less than or equal to the target value (i.e. the matching degree between the first image and the display content of the interface is less than or equal to the target value), the image display mode of the target application is determined to be image partial area display, and privacy processing is performed on the first image or prompt information is output; if the area is greater than the target value (i.e. the matching degree is greater than the target value), the image display mode is determined to be image full area display, and the first image of the target user is transmitted directly to the target application.
In a specific implementation, the image display mode of the target application may be determined on the basis of the default image every time, i.e. the default-image procedure is executed each time the target application is detected invoking the camera to shoot.
Alternatively, the determination may be made directly from a pre-stored image display mode label of the target application. Specifically, the image display mode of the target application (i.e. its image display mode label) is determined in advance on the basis of the default image and recorded, for example by storing a correspondence between the target application and its label, where the label is either image partial area display or image full area display. During later use of the target application, the image display mode corresponding to it can then be determined quickly from the label alone; if the label indicates image partial area display, the matching degree between the first image and the display content of the interface is determined to be less than or equal to the target value, and privacy processing is performed on the first image or prompt information is output.
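A simple per-application cache of such labels might look like the following sketch; the file name, the label strings and the use of a JSON store are assumptions for illustration:

```python
import json
import os
from typing import Optional

TAG_FILE = "display_mode_tags.json"   # hypothetical local store for the labels

def load_display_mode(app_id: str) -> Optional[str]:
    """Return a previously recorded label ('partial' or 'full') for the app, if any."""
    if not os.path.exists(TAG_FILE):
        return None
    with open(TAG_FILE, "r", encoding="utf-8") as f:
        return json.load(f).get(app_id)

def save_display_mode(app_id: str, label: str) -> None:
    """Record the label so later captures can skip the default-image check."""
    tags = {}
    if os.path.exists(TAG_FILE):
        with open(TAG_FILE, "r", encoding="utf-8") as f:
            tags = json.load(f)
    tags[app_id] = label                # 'partial' -> always privacy-process or prompt
    with open(TAG_FILE, "w", encoding="utf-8") as f:
        json.dump(tags, f)
```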
Specifically, the above determination of the image display mode is executed once in advance for each designated application installed on the electronic device (including the target application and other applications capable of invoking the camera application), i.e. it is determined whether the designated application displays the default image as a partial image area or in full, and the image display mode of the designated application is labelled according to the result. Then, when a target application whose label is image partial area display invokes the camera (the camera application), after the first image captured by the camera has been acquired, privacy processing is first performed automatically as in step S104 and the second image obtained from the first image is transmitted to the target application, or prompt information is output to prompt the user to adjust the pose.
Furthermore, the way the target application displays the image data of the target user may change, in which case its image display mode label needs to be changed as well; a verification mechanism for the label is therefore introduced. Specifically, when an application version update of the target application is detected, the image display mode of the target application is determined again on the basis of the default image, the original image display mode is updated with the new one, and whether to trigger step S104 (privacy processing on the first image, or output of prompt information) is decided from the updated label.
Further, the image display mode of the target application may change even without a detected version update. For a target application labelled as image full area display, a change of display mode that leaves the original label in place could lead to leakage of user private data, so a label verification process may be run for such applications at preset time intervals: for a target application whose label is image full area display, the image display mode is determined on the basis of the default image not only when an application version update is detected but also periodically, at a preset interval (i.e. it is determined whether the application displays the default image as a partial image area or in full), and the result is used to decide whether the image display mode label of the target application needs to be updated.
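The re-verification policy could be expressed as a small predicate such as the one below; the one-week interval and the label strings are assumptions used only for illustration:

```python
import time

RECHECK_INTERVAL = 7 * 24 * 3600     # assumed interval: one week, in seconds

def needs_reverification(label: str, last_checked: float, version_changed: bool) -> bool:
    """Re-run the default-image check when the app version changed or, for apps
    labelled as full-area display, when the last check is older than the interval."""
    if version_changed:
        return True
    return label == "full" and time.time() - last_checked > RECHECK_INTERVAL
```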
In the embodiments of the application, the image display mode label of the target application is recorded in advance, so that during use of the target application it can be determined directly from the label whether the image display mode is image partial area display or image full area display, without determining it from the default image every time; this improves the response efficiency of the target application to user service requests. Because an application version update may change the display mode of the application's shooting interface and thus invalidate the label, the label verification mechanism described above is introduced to verify the image display mode of the target application, improving the accuracy of the privacy protection applied to the first image and thereby further improving the security of the target user's private data.
Further, when the matching degree between the first image and the display content of the interface is determined to be less than or equal to the target value, i.e. when it is determined that the first image is displayed as a partial image area on the display interface of the target application (the image display mode of the target application is image partial area display), privacy processing may be performed on the first image directly, or prompt information may be output. However, although a partially displayed first image may contain private data that the user cannot observe, carrying a risk of leakage, it may equally contain no private data at all. To make image privacy protection more targeted while still guaranteeing the security of the user's private data, image recognition may therefore be performed on the first image to determine whether it contains private user data, and the privacy protection processing executed only when it does. On this basis, performing privacy processing on the first image or outputting prompt information when the matching degree between the first image and the display content of the interface is determined to be less than or equal to the target value (i.e. when the image display mode of the target application is determined to be image partial area display) specifically comprises:
carrying out image recognition on the first image by using a pre-trained image recognition model to obtain a privacy risk recognition result; and if the privacy risk identification result indicates that privacy leakage risks exist, performing privacy processing on the first image or outputting prompt information.
The model parameters of a preset image recognition model are iteratively trained and updated using an existing machine learning method until the model's objective function converges, yielding a trained image recognition model; this model is used to perform image recognition on the first image and obtain a privacy risk recognition result.
When the image recognition model is used for prediction, if the first image input into the pre-trained model contains private data of the target user, the recognition result is that a risk of privacy leakage exists. If the recognition result indicates such a risk, privacy processing is performed on the first image to obtain a second image that contains no private data of the target user, and the second image is transmitted to the target application for the related business processing; alternatively, prompt information is output so that the target user adjusts the pose based on the image content observed in real time.
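The application does not specify the model architecture; assuming a generic binary classifier with a Keras-style predict interface, the inference step could look like this sketch (the normalisation, threshold and interface are assumptions):

```python
import numpy as np

def privacy_risk(first_image: np.ndarray, model, threshold: float = 0.5) -> bool:
    """Run a pre-trained binary classifier over the frame and report whether a
    privacy-leak risk is recognised."""
    batch = first_image[np.newaxis].astype(np.float32) / 255.0   # shape 1 x H x W x C
    score = float(np.ravel(model.predict(batch))[0])             # P(private content)
    return score >= threshold
```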
Specifically, once the matching degree between the first image and the display content of the interface has been determined to be less than or equal to the target value, i.e. once the image display mode of the target application has been determined to be image partial area display, privacy processing may be performed on the first image directly. To make the privacy processing more targeted, privacy risk recognition may first be performed on the first image, and privacy processing carried out only if the first image is recognised as containing private data of the target user. When this recognition step is added, prompt information can be output based on the image area in which the recognised private data lies, prompting the target user to adjust the pose, for example "Privacy may be exposed, please adjust the angle", "Privacy may be exposed, please adjust your clothing" or "Privacy may be exposed, please change the shooting location"; in a specific implementation, the image area containing the private data can be marked in the first image displayed in the floating window so that the user can quickly locate it. In addition, or instead, the image area containing the private data can be replaced directly with the preset image.
If the first image input into the pre-trained image recognition model contains no private data of the target user, the recognition result is that no risk of privacy leakage exists, and the first image is transmitted directly to the target application for the corresponding business processing. Further, to improve the reliability of the recognition result, the user may be prompted to confirm that there is no risk of privacy leakage, and the first image is transmitted to the target application only after the user's confirmation is detected.
In the image data processing method of the embodiments of the application, a first image captured by a camera is acquired, the first image comprising a face image area of a target user, and privacy processing is performed on the first image, or prompt information is output, when the matching degree between the first image and the display content of the interface is less than or equal to a target value. As explained above, matching the first image actually captured by the camera against the content actually displayed on the interface of the target application reveals, when the matching degree is less than or equal to the target value, that the first image cannot be fully displayed there and that its undisplayed part may contain the user's private data. Performing privacy processing on that part before the first image is transmitted to the target application, or outputting prompt information, reduces the risk that this private data is leaked, prevents the leakage of the target user's private data at its source, protects the user's private data, and improves the security of the image data captured when the target application invokes the camera.
It should be noted that, in the image data processing method provided in the embodiment of the present application, the execution subject may be an electronic device, in particular a processor in the electronic device, or a control module in the electronic device for executing the image data processing method. In the embodiment of the present application, an electronic device executing the image data processing method is taken as an example to describe the electronic device provided by the embodiment of the present application.
On the basis of the same technical concept, an embodiment of the present application further provides an electronic device corresponding to the image data processing method provided by the foregoing embodiments. Fig. 6 is a schematic diagram of the module composition of the electronic device provided by the embodiment of the present application, where the electronic device is configured to execute the image data processing method described in fig. 1 to fig. 5. As shown in fig. 6, the electronic device includes:
a first image obtaining module 602, configured to obtain a first image collected by a camera, where the first image includes a face image region of a target user;
the first image processing module 604 is configured to perform privacy processing on the first image or output prompt information when the matching degree between the first image and the display content of the interface is smaller than or equal to a target value.
In the embodiment of the application, the first image actually collected by the camera is matched against the content of the first image actually displayed on the interface of the target application. If the matching degree is less than or equal to the target value, the first image actually collected by the camera cannot be completely displayed on the interface of the target application, and the part of the first image that is not displayed on the interface may contain the user's privacy data, so there is a risk that the user's privacy data is leaked without the user's knowledge. Therefore, before the first image is transmitted to the target application, privacy processing needs to be performed on the part of the first image that is not displayed on the interface of the target application to obtain an image that does not contain the user's privacy data, or prompt information needs to be output. This reduces the risk that the privacy data present in the undisplayed part of the first image is leaked, fundamentally avoids the leakage of the target user's privacy data, achieves the effect of protecting the user's privacy data, and improves the security of the image data captured when the target application calls the camera to shoot.
Optionally, as shown in fig. 7, the electronic device further includes a second image transmission module 606, configured to:
and transmitting a second image obtained based on the first image to a target application, wherein the target application performs corresponding business processing based on the second image.
Optionally, as shown in fig. 7, the electronic device further includes:
an interface content obtaining module 608, configured to obtain display content of an interface;
an image comparison module 610, configured to perform image content comparison on the first image and the display content, and determine a matching degree between the first image and the display content.
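One possible realization of the comparison performed by the image comparison module 610 is to locate the display content inside the first image and take the fraction of the first image that it covers as the matching degree. The sketch below uses OpenCV template matching; both the library choice and this particular definition of the matching degree are assumptions for illustration, not limitations of the embodiment.

    import cv2
    import numpy as np

    def matching_degree_by_content(first_image: np.ndarray, display_content: np.ndarray) -> float:
        """Locate the interface's display content inside the captured first image and
        return the fraction of the first image that it covers (0.0 to 1.0).
        Assumes the display content is a crop no larger than the captured frame."""
        result = cv2.matchTemplate(first_image, display_content, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        if max_val < 0.5:  # display content not found in the captured frame
            return 0.0
        shown_area = display_content.shape[0] * display_content.shape[1]
        total_area = first_image.shape[0] * first_image.shape[1]
        return shown_area / total_area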
Optionally, the first image further includes: an image region other than the target user's facial image region;
the first image processing module 604 is specifically configured to:
carrying out image replacement processing on a target image area in the other image areas;
the second image transmission module 606 is specifically configured to:
generating a second image of the target user based on the other image areas after the image replacement and the face image area of the target user;
transmitting the second image to a target application.
Optionally, the first image processing module 604 is further specifically configured to:
determining a target image area of the other image areas, the target image area comprising: the other image area or an image area which is not displayed on the interface in the other image area;
and performing image replacement processing on the target image area based on a preset image.
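As a minimal sketch of this replacement step (OpenCV and NumPy are assumed; the rectangular (x, y, w, h) description of the target image area and the helper name are illustrative only):

    import cv2
    import numpy as np

    def replace_target_area(first_image: np.ndarray,
                            preset_image: np.ndarray,
                            target_area: tuple[int, int, int, int]) -> np.ndarray:
        """Replace the target area (x, y, w, h) of the first image with a preset
        image, leaving the facial image area and the displayed part untouched.
        Assumes first_image and preset_image have the same number of channels."""
        x, y, w, h = target_area
        second_image = first_image.copy()
        second_image[y:y + h, x:x + w] = cv2.resize(preset_image, (w, h))
        return second_image

The resulting second image, in which only the target area has been replaced, is what would then be transmitted to the target application.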
Optionally, the first image processing module 604 is further specifically configured to:
displaying the first image in a form of a floating window to prompt the target user to adjust a pose based on the first image.
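To make the floating-window prompt easier to act on, the area containing the privacy data can be marked before the first image is shown. A minimal OpenCV sketch, with the rectangle and label style purely illustrative:

    import cv2
    import numpy as np

    def mark_private_area(first_image: np.ndarray,
                          private_area: tuple[int, int, int, int]) -> np.ndarray:
        """Draw a rectangle and label around the private area so the user can quickly
        locate it in the floating-window preview."""
        x, y, w, h = private_area
        preview = first_image.copy()
        cv2.rectangle(preview, (x, y), (x + w, y + h), color=(0, 0, 255), thickness=2)
        cv2.putText(preview, "privacy risk", (x, max(y - 8, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        return preview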
Optionally, the electronic device further includes:
a third image acquisition module, configured to acquire a third image acquired by using the camera in a case where a confirmation input of end of pose adjustment of the target user is received;
and the third image transmission module is used for transmitting the third image to a target application.
In the electronic device of the embodiment of the application, a first image collected by a camera is obtained, where the first image includes a facial image area of a target user, and when the matching degree between the first image and the display content of the interface is less than or equal to a target value, privacy processing is performed on the first image, or prompt information is output. That is, the first image actually collected by the camera is matched against the content of the first image actually displayed on the interface of the target application. If the matching degree is less than or equal to the target value, the first image actually collected by the camera cannot be completely displayed on the interface of the target application, and the part of the first image that is not displayed on the interface may contain the user's privacy data, so there is a risk that the user's privacy data is leaked without the user's knowledge. Therefore, before the first image is transmitted to the target application, privacy processing needs to be performed on the part of the first image that is not displayed on the interface of the target application to obtain an image that does not contain the user's privacy data, or prompt information needs to be output. This reduces the risk that the privacy data present in the undisplayed part of the first image is leaked, fundamentally avoids the leakage of the target user's privacy data, achieves the effect of protecting the user's privacy data, and improves the security of the image data captured when the target application calls the camera to shoot.
The electronic device in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
The electronic device in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The electronic device provided in the embodiment of the present application can implement each process implemented in the image data processing method embodiments of fig. 1 to 5, and for avoiding repetition, details are not repeated here.
Optionally, as shown in fig. 8, an electronic device is further provided in an embodiment of the present application, and includes a processor 810, a memory 809, and a program or an instruction stored in the memory 809 and executable on the processor 810, where the program or the instruction is executed by the processor 810 to implement each process of the above-mentioned embodiment of the image data processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811.
Those skilled in the art will appreciate that the power supply 811 (e.g., a battery) supplies power to the various components, and the power supply 811 may be logically connected to the processor 810 via a power management system, so that charging, discharging, and power consumption are managed via the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described in detail here.
The processor 810 is configured to acquire a first image acquired by using a camera, where the first image includes a face image region of a target user;
and when the matching degree of the first image and the display content of the interface is smaller than or equal to a target value, carrying out privacy processing on the first image or outputting prompt information.
In the embodiment of the application, the first image actually collected by the camera is matched against the content of the first image actually displayed on the interface of the target application. If the matching degree is less than or equal to the target value, the first image actually collected by the camera cannot be completely displayed on the interface of the target application, and the part of the first image that is not displayed on the interface may contain the user's privacy data, so there is a risk that the user's privacy data is leaked without the user's knowledge. Therefore, before the first image is transmitted to the target application, privacy processing needs to be performed on the part of the first image that is not displayed on the interface of the target application to obtain an image that does not contain the user's privacy data, or prompt information needs to be output. This reduces the risk that the privacy data present in the undisplayed part of the first image is leaked, fundamentally avoids the leakage of the target user's privacy data, achieves the effect of protecting the user's privacy data, and improves the security of the image data captured when the target application calls the camera to shoot.
Optionally, the processor 810 is further configured to transmit a second image obtained based on the first image to a target application after performing privacy processing on the first image or outputting a prompt message, where the target application performs corresponding service processing based on the second image.
Optionally, the processor 810 is further configured to: before privacy processing is performed on the first image or the prompt information is output when the matching degree between the first image and the display content of the interface is less than or equal to a target value, acquire the display content of the interface; and compare the image content of the first image with the display content to determine the matching degree between the first image and the display content.
Optionally, the first image further includes: an image region other than the target user's facial image region; the processor 810 is further configured to perform image replacement processing on a target image area in the other image areas; generating a second image of the target user based on the other image areas after the image replacement and the face image area of the target user; transmitting the second image to a target application.
Optionally, the processor 810 is further configured to determine a target image area in the other image areas, where the target image area includes: the other image area or an image area which is not displayed on the interface in the other image area; and performing image replacement processing on the target image area based on a preset image.
Optionally, the processor 810 is further configured to display the first image in the form of a floating window, so as to prompt the target user to adjust the pose based on the first image.
Optionally, the processor 810 is further configured to, after outputting the prompt information, obtain a third image acquired by using the camera under the condition that a confirmation input of the end of the pose adjustment of the target user is received; transmitting the third image to a target application.
In the electronic device of the embodiment of the application, a first image collected by a camera is obtained, where the first image includes a facial image area of a target user, and when the matching degree between the first image and the display content of the interface is less than or equal to a target value, privacy processing is performed on the first image, or prompt information is output. That is, the first image actually collected by the camera is matched against the content of the first image actually displayed on the interface of the target application. If the matching degree is less than or equal to the target value, the first image actually collected by the camera cannot be completely displayed on the interface of the target application, and the part of the first image that is not displayed on the interface may contain the user's privacy data, so there is a risk that the user's privacy data is leaked without the user's knowledge. Therefore, before the first image is transmitted to the target application, privacy processing needs to be performed on the part of the first image that is not displayed on the interface of the target application to obtain an image that does not contain the user's privacy data, or prompt information needs to be output. This reduces the risk that the privacy data present in the undisplayed part of the first image is leaked, fundamentally avoids the leakage of the target user's privacy data, achieves the effect of protecting the user's privacy data, and improves the security of the image data captured when the target application calls the camera to shoot.
It should be understood that, in the embodiment of the present application, the radio frequency unit 801 may be used for receiving and sending signals during a message transceiving process or a call process. Specifically, the radio frequency unit 801 receives downlink data from a base station and forwards it to the processor 810 for processing, and sends uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 801 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 802, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the electronic device (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input Unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the Graphics processor 8041 processes image data of a still picture or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. A touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two portions of a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 809 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems. The processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The electronic device also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the electronic device is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 807 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 8071 (e.g., operations by the user on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 810, and receives and executes commands from the processor 810. In addition, the touch panel 8071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 8071, the user input unit 807 may include other input devices 8072. In particular, other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 808 is an interface for connecting an external device to the electronic device. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device, or may be used to transmit data between the electronic device and the external device.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device, and the like. Further, the memory 809 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The processor 810 is the control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby monitoring the electronic device as a whole. The processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 810.
The electronic device may also include a power supply 811 (such as a battery) for powering the various components. Preferably, the power supply 811 may be logically coupled to the processor 810 via a power management system, so that charging, discharging, and power consumption management functions are performed via the power management system.
In addition, the electronic device includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an electronic device is further provided in an embodiment of the present application, and includes a processor 810, a memory 809, and a program or an instruction stored in the memory 809 and executable on the processor 810, where the program or the instruction is executed by the processor 810 to implement each process of the above-mentioned embodiment of the image data processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by the processor 810, the program or the instruction implements each process of the embodiment of the image data processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor 810 is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the embodiment of the image data processing method, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, or a chip system.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (16)

1. A method of processing image data, the method comprising:
acquiring a first image acquired by a camera, wherein the first image comprises a face image area of a target user;
and when the matching degree of the first image and the display content of the interface is smaller than or equal to a target value, carrying out privacy processing on the first image or outputting prompt information.
2. The method according to claim 1, further comprising, after privacy processing the first image or outputting a prompt message:
and transmitting a second image obtained based on the first image to a target application, wherein the target application performs corresponding business processing based on the second image.
3. The method according to claim 1, wherein before performing privacy processing on the first image or outputting prompt information when the matching degree of the first image and the display content of the interface is smaller than or equal to a target value, the method further comprises:
acquiring display content of an interface;
and comparing the image content of the first image with the display content, and determining the matching degree of the first image and the display content.
4. The method of claim 2, wherein the first image further comprises: an image region other than the target user's facial image region;
the privacy processing of the first image comprises:
carrying out image replacement processing on a target image area in the other image areas;
the transmitting a second image obtained based on the first image to a target application comprises:
generating a second image of the target user based on the other image areas after the image replacement and the face image area of the target user;
transmitting the second image to a target application.
5. The method according to claim 4, wherein the image replacement processing of the target image area in the other image areas comprises:
determining a target image area of the other image areas, the target image area comprising: the other image area or an image area which is not displayed on the interface in the other image area;
and performing image replacement processing on the target image area based on a preset image.
6. The method of claim 1, wherein outputting the prompt message comprises:
displaying the first image in a form of a floating window to prompt the target user to adjust a pose based on the first image.
7. The method of claim 6, wherein after outputting the prompt message, further comprising:
acquiring a third image acquired by using the camera under the condition of receiving confirmation input of finishing the pose adjustment of the target user;
transmitting the third image to a target application.
8. An electronic device, characterized in that the electronic device comprises:
the first image acquisition module is used for acquiring a first image acquired by a camera, wherein the first image comprises a face image area of a target user;
and the first image processing module is used for carrying out privacy processing on the first image or outputting prompt information under the condition that the matching degree of the first image and the display content of the interface is smaller than or equal to a target value.
9. The electronic device of claim 8, further comprising:
and the second image transmission module is used for transmitting a second image obtained based on the first image to a target application, and the target application performs corresponding business processing based on the second image.
10. The electronic device of claim 8, further comprising:
the interface content acquisition module is used for acquiring the display content of the interface;
and the image comparison module is used for comparing the image content of the first image with the display content and determining the matching degree of the first image and the display content.
11. The electronic device of claim 9, wherein the first image further comprises: an image region other than the target user's facial image region;
the first image processing module is specifically configured to:
carrying out image replacement processing on a target image area in the other image areas;
the second image transmission module is specifically configured to:
generating a second image of the target user based on the other image areas after the image replacement and the face image area of the target user;
transmitting the second image to a target application.
12. The electronic device of claim 11, wherein the first image processing module is further specifically configured to:
determining a target image area of the other image areas, the target image area comprising: the other image area or an image area which is not displayed on the interface in the other image area;
and performing image replacement processing on the target image area based on a preset image.
13. The electronic device of claim 8, wherein the first image processing module is further specifically configured to:
displaying the first image in a form of a floating window to prompt the target user to adjust a pose based on the first image.
14. The electronic device of claim 13, further comprising:
a third image acquisition module, configured to acquire a third image acquired by using the camera in a case where a confirmation input of end of pose adjustment of the target user is received;
and the third image transmission module is used for transmitting the third image to a target application.
15. An electronic device, comprising: processor, memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor implement the steps of the method of processing image data according to any one of claims 1 to 7.
16. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the method of processing image data according to any one of claims 1 to 7.