WO2023129055A1 - Reliable in-camera anonymization method for machine learning/deep learning - Google Patents
- Publication number: WO2023129055A1
- Application number: PCT/TR2022/051615 (TR2022051615W)
- Authority: WO — WIPO (PCT)
- Prior art keywords: camera, light, digital, sensor, image
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6227—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database where protection concerns the structure of data, e.g. records, types, queries
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
- G06F21/6254—Protecting personal data, e.g. for financial or medical purposes by anonymising data, e.g. decorrelating personal data from the owner's identification
- G06F21/70—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
- G06F21/82—Protecting input, output or interconnection devices
- G06F21/84—Protecting input, output or interconnection devices output devices, e.g. displays or monitors
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- General Health & Medical Sciences (AREA)
- Bioethics (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for performing anonymization after receiving raw data in a camera and anonymizing the transmitted data when the data is transferred from the camera.
Description
RELIABLE IN-CAMERA ANONYMIZATION METHOD FOR MACHINE LEARNING/DEEP LEARNING
Technical Field
The invention relates to a method for performing anonymization after receiving raw data in a camera and anonymizing the transmitted data when the data is transferred from the camera.
Background
Machine learning and deep learning methods, which are frequently used in autonomous systems and related artificial intelligence applications, require large amounts of image data. This data is generally collected in places with vehicle and pedestrian traffic, so the collected data may include information that touches on the privacy of bystanders or of assets belonging to them. When artificial intelligence models must be developed from this data, it may be necessary to share the data with third parties. Because the collected data contains elements of personal privacy, this situation leads to violations of personal data protection law.
In the United States patent document numbered US2020097767, which is in the state of the art, systems and methods for synthesizing and/or modifying features in images to limit recognition by classifier algorithms are described. This method performs anonymization by overlaying noise, or by generating synthetic data, on photographs in a way that is imperceptible to the human eye, preventing the detection of faces by standard face detection algorithms. There is no visual anonymization, only digital anonymization, so that algorithms cannot find the faces.
In the United States patent document numbered US2020089995, which is in the state of the art, the degradation of computerized face detection is described. It includes receiving a source image containing a representation of a face and calculating a perturbation for the source image. The perturbation is specific to the source image and is configured for a target face detector. A distorted image is then created by adding the distortion to the source image, and the distorted image can be output instead of the source image. With this method, faces are found and the corresponding image regions are blurred. Anonymization is achieved both visually and digitally. No method is presented for license plates. Images or videos obtained with this method cannot be used afterward for training machine learning algorithms.
In the International patent document numbered WO 03/049035, which is in the state of the art, an image processing system for automatic face or skin blurring is described. In the developed system, all faces, all skin, or specific faces can be blurred. The method uses face recognition and face-tracking algorithms to blur every face except one identified face. The face-tracking algorithm performs the blurring on consecutive frames; in this way, the method continues to work correctly even when the face detection algorithm cannot find the faces. No solution is provided for the license plates used on vehicles.
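The tracked-blurring approach described above can be sketched as follows. This is an illustration only: `blur_region` and the carried-forward box are simplified stand-ins for a real Gaussian blur and a real face-tracking algorithm, which keeps blurring the last known face region even on frames where detection fails.

```python
import numpy as np

def blur_region(frame, box):
    """'Blur' a region by replacing it with its mean intensity, a
    simplified stand-in for a real Gaussian blur."""
    x, y, w, h = box
    out = frame.copy()
    out[y:y + h, x:x + w] = int(out[y:y + h, x:x + w].mean())
    return out

def blur_tracked_face(frames, initial_box):
    """Blur the tracked face region in consecutive frames. Here the
    'tracker' simply carries the last known box forward, standing in
    for a face tracker that works when detection fails."""
    box = initial_box
    return [blur_region(f, box) for f in frames]

rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(8, 8), dtype=np.uint8) for _ in range(3)]
blurred = blur_tracked_face(frames, (1, 1, 4, 4))
```

Each output frame has the tracked region flattened while pixels outside the box are untouched.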
In the United States patent document numbered US2019147185, which is in the state of the art, a method is described for providing privacy protection in a received image, comprising obtaining image coordinates associated with one or more target persons in the received image. It anonymizes faces by superimposing a calculated noise signal on the input image so that face detection algorithms cannot detect them. As in the aforementioned patent, the object is to prevent private information from being extracted by algorithms from photos of people found online. There is no visual anonymization, and no solution is provided for license plates.
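The noise-superimposition idea behind these perturbation-based prior-art methods can be illustrated with an FGSM-style sketch. Note the hedging: the `gradient` below is a mocked placeholder for the true gradient of a target face detector's score with respect to the image; no real detector is involved.

```python
import numpy as np

def adversarial_perturbation(image, gradient, epsilon=2.0):
    """FGSM-style perturbation: step in the sign of the detector's
    gradient, with epsilon (in 8-bit intensity units) kept small so
    the change stays imperceptible to the human eye."""
    noise = epsilon * np.sign(gradient)
    perturbed = np.clip(image.astype(np.float32) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# Toy example: a random "face" image and a mocked gradient standing in
# for the true gradient of a face detector's score.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
gradient = rng.standard_normal((8, 8))
out = adversarial_perturbation(image, gradient)
```

The perturbation is bounded per pixel by epsilon, which is why such images look unchanged to a person while degrading a targeted detector.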
Therefore, there is a need to develop the method of the invention.
Objects and Brief Description of the Invention
The object of the present invention is to provide a method for performing the anonymization process after the raw data is received in the camera and to ensure that when data is transferred from the camera, the transferred data is anonymized.
Another object of the present invention is the development of a method for identifying faces and license plates in the image taken from the camera and anonymizing them within the camera.
Nowadays, violation of people's privacy during outdoor shooting is a major issue. The developed method enables privacy-protected photo and video capture through two different operating modes.
The developed method can be implemented on a video camera/photo camera as well as on devices such as tablets and smartphones.
When the system is running in safe mode, license plates and human faces are anonymized. Instead of anonymization by blurring/pixelization or anonymization by adding noise that is invisible to the human eye, the method is based on imitating the objects with machine learning methods.
Detailed Description of the Invention
The invention is a reliable in-camera anonymization method for machine learning/deep learning in safe mode, and it comprises the following operation steps;
- falling of the light captured from a particular scene on the sensor in the camera by being refracted through optical systems such as lenses,
- generating the analog data related to the light falling on the sensor by capturing the light with the sensor,
- converting the analog data to digital by an integrated analog-to-digital converter,
- obtaining a digital image that can be processed by computer vision algorithms from the digital data,
- evaluating the regions of interest (ROIs) of faces and license plates in the image as privacy elements by detecting them with binary-classifier artificial intelligence algorithms,
- estimating the objects found by regression and, in the case of a face, calculating the pose information with the help of a reference 3D model together with the attributes that artificial intelligence models can extract from the face frame,
- anonymizing license plates and faces synthetically, using the calculated pose information, with the Generative Adversarial Networks method as a result of the competition between two equivalent networks, the generator and the discriminator,
- embedding the anonymized new faces generated using machine-learning-based methods into the original image by replacing the ROIs,
- compressing the video,
- deleting the raw data,
- sending the compressed video over the network.
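As an illustration only, the safe-mode steps after digitization might be sketched as below. `detect_privacy_rois` and `gan_generate_replacement` are hypothetical stubs standing in for the trained detector and GAN generator the method assumes, and `zlib` stands in for a real video codec.

```python
import zlib
import numpy as np

def detect_privacy_rois(frame):
    """Stub for the binary-classifier detectors: returns (x, y, w, h)
    boxes for faces and license plates. A fixed box is used here."""
    return [(2, 2, 4, 4)]

def gan_generate_replacement(patch):
    """Stub for the GAN generator, which would synthesize a new face or
    plate matching the estimated pose and attributes. Here it just
    returns random content of matching shape."""
    rng = np.random.default_rng(1)
    return rng.integers(0, 256, size=patch.shape, dtype=np.uint8)

def anonymize_frame(frame):
    """Safe-mode pipeline after the ADC stage: detect ROIs, replace each
    with a synthetic patch, then compress for transmission."""
    out = frame.copy()
    for (x, y, w, h) in detect_privacy_rois(out):
        out[y:y + h, x:x + w] = gan_generate_replacement(out[y:y + h, x:x + w])
    compressed = zlib.compress(out.tobytes())  # placeholder for a video codec
    return out, compressed

rng = np.random.default_rng(0)
raw = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
anon, payload = anonymize_frame(raw)
del raw  # the "deleting the raw data" step: drop the unanonymized frame
```

The key design point the sketch preserves is ordering: anonymization happens before compression and transmission, so the raw frame never leaves the device.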
In one embodiment of the invention, a reliable in-camera method for machine learning/deep learning in standard mode is developed, and it comprises the following operation steps;
- falling of the light captured from a particular scene on the sensor in the camera by being refracted through optical systems such as lenses,
- generating the analog data related to the light falling on the sensor by capturing the light with the sensor,
- converting the analog data to digital by an integrated analog-to-digital converter,
- obtaining a digital image that can be processed by computer vision algorithms from the digital data,
- compressing the video,
- sending the compressed video over the network.
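The standard-mode path reduces to capture, compression, and transmission. A minimal sketch, with `zlib` standing in for the video codec and the returned payload standing in for the network send:

```python
import zlib
import numpy as np

def standard_mode_frame(frame):
    """Standard-mode path: the digitized frame is compressed and sent
    as-is, with no anonymization stage."""
    return zlib.compress(frame.tobytes())

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = standard_mode_frame(frame)

# Round-trip check: decompression recovers the exact digital image,
# i.e. nothing in the frame was altered before transmission.
restored = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(8, 8)
```

Because the compression is lossless here, the round trip demonstrates that standard mode transmits the unmodified image, which is exactly what the mode indicator is meant to disclose to bystanders.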
The mode indicator offers two different modes for two different methods. The main object is to inform the outside world which mode the designed camera is in. This way, people who are being photographed or videotaped will be able to see whether their photo or video was taken in a privacy-protected or conventional way. This indicator can give a light warning, or other types of warnings can be considered to convey this information.
In safe operating mode, this indicator may emit green light; in standard operating mode, it may emit red light. These colors are representative: warnings in different colors can be used to distinguish the two modes. The difference between safe operating mode and standard operating mode can also be indicated by flashing the lights at different frequencies.
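The two signalling options above, distinct colors and distinct blink frequencies, could be combined in a small state function like the following. The specific colors and frequencies are illustrative values chosen for the sketch, not taken from the patent.

```python
# Illustrative mode table: color and blink rate both encode the mode.
MODES = {
    "safe":     {"color": "green", "blink_hz": 1.0},
    "standard": {"color": "red",   "blink_hz": 4.0},
}

def indicator_state(mode, t):
    """Return (color, is_on) for the mode light at time t seconds,
    using a 50% duty cycle at the mode's blink frequency."""
    cfg = MODES[mode]
    period = 1.0 / cfg["blink_hz"]
    is_on = (t % period) < (period / 2)
    return cfg["color"], is_on
```

A firmware loop would poll `indicator_state` and drive the LED accordingly, so a bystander can read the mode either from the color or from the blink rate alone.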
Claims
1. The invention is a reliable in-camera anonymization method for machine learning/deep learning in safe mode, characterized in that it comprises the following operation steps;
- falling of the light captured from a particular scene on the sensor in the camera by being refracted through optical systems such as lenses,
- generating the analog data related to the light falling on the sensor by capturing the light with the sensor,
- converting the analog data to digital by an integrated analog-to-digital converter,
- obtaining a digital image that can be processed by computer vision algorithms from the digital data,
- evaluating the regions of interest (ROIs) of faces and license plates in the image as privacy elements by detecting them with binary-classifier artificial intelligence algorithms,
- estimating the objects found by regression and, in the case of a face, calculating the pose information with the help of a reference 3D model together with the attributes that artificial intelligence models can extract from the face frame,
- anonymizing license plates and faces synthetically, using the calculated pose information, with the Generative Adversarial Networks method as a result of the competition between two equivalent networks, the generator and the discriminator,
- embedding the anonymized new faces generated using machine-learning-based methods into the original image by replacing the ROIs,
- compressing the video,
- deleting the raw data,
- sending the compressed video over the network.
2. A method according to claim 1, characterized in that it comprises the step of providing a warning with a light indicating that the device is operating in a safe mode during the application of the method.
3. The invention is a reliable in-camera method for machine learning/deep learning in standard mode, characterized in that it comprises the following operation steps;
- falling of the light captured from a particular scene on the sensor in the camera by being refracted through optical systems such as lenses,
- generating the analog data related to the light falling on the sensor by capturing the light with the sensor,
- converting the analog data to digital by an integrated analog-to-digital converter,
- obtaining a digital image that can be processed by computer vision algorithms from the digital data,
- compressing the video,
- sending the compressed video over the network.

4. A method according to claim 3, characterized in that it comprises the step of providing a warning with a light indicating that the device is operating in standard mode during the application of the method.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TR2021/021474 (TR2021021474A1) | 2021-12-28 | | RELIABLE IN-CAMERA ANONYMIZATION METHOD FOR MACHINE LEARNING/DEEP LEARNING |
| TR2021021474 | 2021-12-28 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023129055A1 | 2023-07-06 |
Family
ID=86999904
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/TR2022/051615 (WO2023129055A1) | Reliable in-camera anonymization method for machine learning/deep learning | 2021-12-28 | 2022-12-27 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023129055A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3451209A1 (en) * | 2017-08-31 | 2019-03-06 | Nokia Technologies Oy | Apparatus and method for anonymizing image content |
CN111242837A (en) * | 2020-01-03 | 2020-06-05 | 杭州电子科技大学 | Face anonymous privacy protection method based on generation of countermeasure network |
WO2020260869A1 (en) * | 2019-06-24 | 2020-12-30 | The University Of Nottingham | Anonymization |
GB2596037A (en) * | 2020-02-21 | 2021-12-22 | Interactive Coventry Ltd | Data anonymisation |
- 2022-12-27: PCT/TR2022/051615 (WO2023129055A1) filed; status: active, Search and Examination.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102123248B1 (en) | Real-time image processing system based on face recognition for protecting privacy | |
KR101215948B1 (en) | Image information masking method of monitoring system based on face recognition and body information | |
KR101641646B1 (en) | Video masking processing method and apparatus | |
US8675065B2 (en) | Video monitoring system | |
CN102045543B (en) | Image processing apparatus and image processing method | |
JP6627750B2 (en) | Image processing system, image processing apparatus, image processing method, and recording medium | |
CN107862658B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
JP2020057111A (en) | Facial expression determination system, program and facial expression determination method | |
CN101299269A (en) | Method and device for calibration of static scene | |
JP2017208616A (en) | Image processing apparatus, image processing method, and program | |
KR101084914B1 (en) | Indexing management system of vehicle-number and man-image | |
US20240046701A1 (en) | Image-based pose estimation and action detection method and apparatus | |
JP5088463B2 (en) | Monitoring system | |
CN108460319B (en) | Abnormal face detection method and device | |
WO2022044369A1 (en) | Machine learning device and image processing device | |
CN107578372B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
Kumar et al. | Border surveillance system using computer vision | |
CN109741224A (en) | Supervision method and Related product | |
WO2023129055A1 (en) | Reliable in-camera anonymization method for machine learning/deep learning | |
CN114529979A (en) | Human body posture identification system, human body posture identification method and non-transitory computer readable storage medium | |
EP4022504A1 (en) | Processing media data with sensitive information | |
CN107770446B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN111147815A (en) | Video monitoring system | |
KR102194511B1 (en) | Representative video frame determination system and method using same | |
TR2021021474A1 (en) | RELIABLE IN-CAMERA ANONYMIZATION METHOD FOR MACHINE LEARNING/DEEP LEARNING |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | | Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22917055; Country of ref document: EP; Kind code of ref document: A1) |
| | DPE1 | | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) |