CN115223011A - Adversarial sample generation method and system for an intelligent driving scenario - Google Patents

Adversarial sample generation method and system for an intelligent driving scenario

Info

Publication number
CN115223011A
Authority
CN
China
Prior art keywords
algorithm
attack
scene
sample
intelligent driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210797243.7A
Other languages
Chinese (zh)
Inventor
陈振威
朱纯志
郑立
石笑生
蔡刚强
钟志灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Intelligent Network Automobile Innovation Center Co ltd
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangdong Intelligent Network Automobile Innovation Center Co ltd
Guangzhou Automobile Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Intelligent Network Automobile Innovation Center Co ltd, Guangzhou Automobile Group Co Ltd filed Critical Guangdong Intelligent Network Automobile Innovation Center Co ltd
Priority to CN202210797243.7A priority Critical patent/CN115223011A/en
Publication of CN115223011A publication Critical patent/CN115223011A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for generating adversarial samples for an intelligent driving scenario, which comprises: receiving a picture to be attacked; determining the current intelligent driving scenario to which the picture to be attacked is applicable, and determining a corresponding black-box attack algorithm according to the current intelligent driving scenario, the current intelligent driving scenario being one of an image classification scenario, an object detection scenario and a speech recognition scenario, and the black-box attack algorithm being one of a saliency-detection attack (SIMBA) algorithm, a boundary attack (BA) algorithm, a hop-skip-jump attack (HSJA) algorithm and an affine projection attack (APA) algorithm; and attacking the picture to be attacked according to the black-box attack algorithm to generate an adversarial sample. The invention further provides an adversarial sample generation system for an intelligent driving scenario. By adapting the black-box attack algorithm to the scenario, the generated samples are closer to real-world attacks.

Description

Adversarial sample generation method and system for an intelligent driving scenario
Technical Field
The invention relates to the technical field of automobiles, and in particular to a method and a system for generating adversarial samples for an intelligent driving scenario.
Background
At present, research on adversarial samples is concentrated mainly in academia; related work in industry is rare, and applications in the field of intelligent driving are rarer still. Research on adversarial-sample attacks, and on defenses against adversarial samples, therefore still has a long way to go in the autonomous driving field.
In an intelligent driving scenario, the processing tasks handled by the neural network system are more complex than in other scenarios, so the neural network models used in this scenario (such as object detection models, object classification models and speech recognition models) are key assets that every manufacturer seeks to protect. In building an intelligent driving model, manufacturers usually highly encapsulate the entire pipeline from the acquired data to the final output, so that outside parties essentially cannot obtain the internal information of the model. Consequently, a white-box attack on an intelligent driving model has low feasibility in a real scenario, whereas a black-box attack is closer to reality.
However, for intelligent driving scenarios, no adversarial sample generation method adapted to black-box attack algorithms has been available so far.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is to provide a method and a system for generating adversarial samples for an intelligent driving scenario, in which the black-box attack algorithm is adapted to the scenario so that the generated samples are closer to real-world attacks.
To solve the above technical problem, an embodiment of the invention provides a method for generating adversarial samples for an intelligent driving scenario, the method comprising the following steps:
receiving a picture to be attacked;
determining the current intelligent driving scenario to which the picture to be attacked is applicable, and determining a corresponding black-box attack algorithm according to the current intelligent driving scenario; the current intelligent driving scenario is one of an image classification scenario, an object detection scenario and a speech recognition scenario; the black-box attack algorithm is one of a saliency-detection attack (SIMBA) algorithm, a boundary attack (BA) algorithm, a hop-skip-jump attack (HSJA) algorithm and an affine projection attack (APA) algorithm;
and attacking the picture to be attacked according to the black-box attack algorithm to generate an adversarial sample.
If the current intelligent driving scenario is an image classification scenario, the black-box attack algorithm is determined to be the SIMBA algorithm.
If the current intelligent driving scenario is an object detection scenario, the black-box attack algorithm is determined to be the BA algorithm or the HSJA algorithm.
If the current intelligent driving scenario is a speech recognition scenario, the black-box attack algorithm is determined to be the APA algorithm.
Wherein the method further comprises:
S41, acquiring a corresponding neural network model according to the current intelligent driving scenario; the acquired neural network model is one of an SSD model for the object detection scenario, a DeepSpeech model for the speech recognition scenario and a MobileNet model for the image classification scenario;
S42, inputting the adversarial sample into the acquired neural network model to obtain a corresponding adversarial-sample detection result, and perturbing the detection result to form an attack sample for attacking the acquired neural network model;
S43, inputting the attack sample into the acquired neural network model to generate an adversarial sample with updated perturbation information;
S44, if the result of comparing the adversarial sample with updated perturbation information against a preset attack result does not meet a preset condition, perturbing the adversarial sample with updated perturbation information again to form a new attack sample, and returning to step S43;
and S45, if the result of comparing the adversarial sample with updated perturbation information against the attack result meets the preset condition, outputting the adversarial sample with updated perturbation information as a new adversarial sample.
Wherein the method further comprises:
training the acquired neural network model using the new adversarial samples.
The picture to be attacked is uploaded manually by a user, or is read from a specified directory.
An embodiment of the invention further provides an adversarial sample generation system for an intelligent driving scenario, comprising:
a to-be-attacked picture receiving unit, configured to receive the picture to be attacked;
a black-box attack algorithm selection unit, configured to determine the current intelligent driving scenario to which the picture to be attacked is applicable and to determine a corresponding black-box attack algorithm according to the current intelligent driving scenario; the current intelligent driving scenario is one of an image classification scenario, an object detection scenario and a speech recognition scenario; the black-box attack algorithm is one of a saliency-detection attack (SIMBA) algorithm, a boundary attack (BA) algorithm, a hop-skip-jump attack (HSJA) algorithm and an affine projection attack (APA) algorithm;
and an adversarial sample generation unit, configured to attack the picture to be attacked according to the black-box attack algorithm to generate an adversarial sample.
If the current intelligent driving scenario is an image classification scenario, the determined black-box attack algorithm is the SIMBA algorithm;
if the current intelligent driving scenario is an object detection scenario, the determined black-box attack algorithm is the BA algorithm or the HSJA algorithm; and
if the current intelligent driving scenario is a speech recognition scenario, the determined black-box attack algorithm is the APA algorithm.
The picture to be attacked is uploaded manually by a user, or is read from a specified directory.
The embodiment of the invention has the following beneficial effects:
1. The method determines a corresponding black-box attack algorithm based on the current intelligent driving scenario to which the picture to be attacked is applicable, and attacks the picture to be attacked with the selected black-box attack algorithm to generate an adversarial sample, so that the sample is closer to a real-world attack;
2. The method attacks the neural network model of the current intelligent driving scenario with the adversarial sample generated by the black-box attack algorithm in order to verify the adversarial sample against that model, and trains the neural network model with the verified, effective adversarial samples, thereby improving the security and robustness of the neural network model.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to describe the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an adversarial sample generation method for an intelligent driving scenario according to an embodiment of the present invention;
Fig. 2 is an architecture diagram of the pre-constructed adversarial-sample attack experimental platform used in the adversarial sample generation method for an intelligent driving scenario according to the embodiment of the present invention;
Fig. 3 is a flowchart of verifying an adversarial sample against a neural network model in the adversarial sample generation method for an intelligent driving scenario according to the embodiment of the present invention;
Fig. 4 is a logic block diagram of improving model robustness and security by attacking a neural network model with adversarial samples in an object detection scenario in the adversarial sample generation method for an intelligent driving scenario according to the embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an adversarial sample generation system for an intelligent driving scenario according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the adversarial sample generation method for an intelligent driving scenario provided by an embodiment of the present invention comprises the following steps:
S1, receiving a picture to be attacked;
S2, determining the current intelligent driving scenario to which the picture to be attacked is applicable, and determining a corresponding black-box attack algorithm according to the current intelligent driving scenario; the current intelligent driving scenario is one of an image classification scenario, an object detection scenario and a speech recognition scenario; the black-box attack algorithm is one of a saliency-detection attack (SIMBA) algorithm, a boundary attack (BA) algorithm, a hop-skip-jump attack (HSJA) algorithm and an affine projection attack (APA) algorithm;
and S3, attacking the picture to be attacked according to the black-box attack algorithm to generate an adversarial sample.
In a specific implementation, before step S1, an adversarial-sample attack experimental platform (as shown in fig. 2) may be constructed in advance to carry out the adversarial sample generation method for the intelligent driving scenario. The architecture of this experimental platform comprises the following four layers: a basic component layer, a data computation layer (also called the neural network model layer), a business processing layer and an experiment operation layer.
The basic component layer provides the basic services and computing support on which the adversarial-sample attack platform runs, including the hardware platform, the operating system and the software frameworks on top of them; the neural network model layer provides the core algorithms with which the user verifies the effectiveness of adversarial samples, including the various neural network models involved and their algorithm implementations; the business processing layer generates adversarial samples for the user, including the adversarial sample generation methods, implementation methods and business processes adapted to the platform; and the experiment operation layer is the portal of the platform, through which adversarial sample generation for a given target can be carried out.
In step S1, the picture to be attacked is selected and uploaded to the platform, and the platform takes the selected picture as the target to be attacked. Two input modes are supported: the picture is either uploaded manually by the user or read from a specified directory.
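By way of illustration only, a minimal sketch of this intake step is given below, assuming a Python implementation; the function name, paths and image format are assumptions for the example and are not specified by the patent.

```python
from pathlib import Path
from PIL import Image

def load_pictures_to_attack(upload_path=None, watch_dir=None):
    """Collect pictures to attack, either from a manual upload or from a directory.

    Exactly one of the two modes is used, mirroring the two input modes
    described above. The PNG filter and RGB conversion are illustrative only.
    """
    if upload_path is not None:
        return [Image.open(upload_path).convert("RGB")]
    if watch_dir is not None:
        return [Image.open(p).convert("RGB")
                for p in sorted(Path(watch_dir).glob("*.png"))]
    raise ValueError("Either an uploaded file or a directory must be specified")
```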
In step S2, the current intelligent driving scenario to which the picture to be attacked is applicable is determined first; the intelligent driving scenarios comprise an image classification scenario, an object detection scenario and a speech recognition scenario.
Secondly, the corresponding black-box attack algorithm is determined according to the current intelligent driving scenario, specifically as follows (a dispatch sketch is given after this list):
(1) For the image classification scenario, the determined black-box attack algorithm is the SIMBA algorithm; it should be noted that the image classification scenario may also use a white-box algorithm, such as the fast gradient sign method (FGSM);
(2) For the object detection scenario, the determined black-box attack algorithm is the BA (Boundary Attack) algorithm or the HSJA (HopSkipJump Attack) algorithm;
(3) For the speech recognition scenario, the determined black-box attack algorithm is the APA (Affine Projection Attack) algorithm.
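By way of illustration only, the scenario-to-algorithm mapping above can be expressed as a simple lookup table; the sketch below is a hypothetical helper and does not form part of the claimed platform.

```python
# Hypothetical dispatch table for the scenario-to-algorithm mapping described above.
SCENARIO_TO_ATTACKS = {
    "image_classification": ["SimBA"],
    "object_detection": ["BoundaryAttack", "HopSkipJump"],
    "speech_recognition": ["APA"],
}

def select_black_box_attack(scenario: str, prefer: int = 0) -> str:
    """Return the name of the black-box attack adapted to the given scenario."""
    try:
        return SCENARIO_TO_ATTACKS[scenario][prefer]
    except KeyError as exc:
        raise ValueError(f"Unsupported intelligent driving scenario: {scenario}") from exc
```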
In step S3, the black-box attack algorithm determined in step S2 is applied to the picture to be attacked, with the attack region specified, so that the platform automatically generates the adversarial sample corresponding to the current intelligent driving scenario.
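For the image-based scenarios, one possible way to run the selected attack is through the open-source Adversarial Robustness Toolbox (ART), which ships SimBA, Boundary Attack and HopSkipJump implementations. The sketch below is only an illustration of that option: the PyTorch classifier wrapper, loss function and default attack parameters are assumptions, restricting the perturbation to a specified attack region is not shown, constructor arguments vary between ART versions, and the patent does not prescribe ART.

```python
import numpy as np
from art.attacks.evasion import SimBA, BoundaryAttack, HopSkipJump
from art.estimators.classification import PyTorchClassifier

def generate_adversarial_sample(attack_name, model, loss, input_shape, nb_classes, x):
    """Run the selected black-box attack on a batch of images x (NCHW, float32)."""
    # Wrap the victim model so the attack can query it as a black box.
    classifier = PyTorchClassifier(model=model, loss=loss,
                                   input_shape=input_shape, nb_classes=nb_classes)
    if attack_name == "SimBA":
        attack = SimBA(classifier)                            # query-efficient perturbation search
    elif attack_name == "BoundaryAttack":
        attack = BoundaryAttack(classifier, targeted=False)   # decision-boundary walk
    elif attack_name == "HopSkipJump":
        attack = HopSkipJump(classifier)                      # gradient-free boundary attack
    else:
        raise ValueError(f"No image attack registered for {attack_name}")
    return attack.generate(x=np.asarray(x, dtype=np.float32))
```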
In this embodiment of the invention, the adversarial sample obtained in step S3 may be used to attack the neural network model of the current intelligent driving scenario, and the effectiveness of the adversarial sample against that model is verified over multiple rounds of attack. Accordingly, the method further comprises:
S41, acquiring a corresponding neural network model according to the current intelligent driving scenario; the acquired neural network model is one of an SSD model for the object detection scenario, a DeepSpeech model for the speech recognition scenario and a MobileNet model for the image classification scenario;
S42, inputting the adversarial sample into the acquired neural network model to obtain a corresponding adversarial-sample detection result, and perturbing the detection result to form an attack sample for attacking the acquired neural network model;
S43, inputting the attack sample into the acquired neural network model to generate an adversarial sample with updated perturbation information;
S44, if the result of comparing the adversarial sample with updated perturbation information against a preset attack result does not meet a preset condition, perturbing the adversarial sample with updated perturbation information again to form a new attack sample, and returning to step S43;
and S45, if the result of comparing the adversarial sample with updated perturbation information against the attack result meets the preset condition, outputting the adversarial sample with updated perturbation information as a new adversarial sample.
It should be noted that the preset condition is a convergence condition used to judge the validity of the adversarial sample. The condition may be preset as a threshold range, and the attack is iterated until the comparison between the adversarial sample and the preset attack result falls within that range (a sketch of this iteration is given below).
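The multi-round verification of steps S42 to S45 can be sketched as the loop below; model_predict, perturb and matches_attack_goal are hypothetical placeholders for the platform's model inference, perturbation and comparison logic, and the round budget is only one possible way to bound the iteration.

```python
def verify_adversarial_sample(model_predict, perturb, matches_attack_goal,
                              adversarial_sample, max_rounds=100):
    """Iteratively perturb an adversarial sample until the preset condition is met.

    model_predict(sample)       -> detection/classification result         (S42/S43)
    perturb(sample, result)     -> attack sample with updated perturbation  (S42/S44)
    matches_attack_goal(result) -> True when the comparison with the preset
                                   attack result falls within the threshold (S44/S45)
    """
    current = adversarial_sample
    for _ in range(max_rounds):
        result = model_predict(current)      # S42/S43: query the victim model
        if matches_attack_goal(result):      # S45: preset condition met
            return current                   # output as the new adversarial sample
        current = perturb(current, result)   # S44: perturb again and retry
    return None                              # no valid sample within the round budget
```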
In this embodiment of the invention, the neural network model may be trained with the verified, effective adversarial samples, thereby improving its security and robustness. Accordingly, the method further comprises: training the acquired neural network model with the new adversarial samples to obtain a neural network model with better security and robustness. It should be noted that the new adversarial samples and the original samples together form a new data set for the neural network model, so that the model gains a certain ability to resist adversarial samples, which enhances its robustness.
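A minimal sketch of this adversarial-training step follows; train_model and the (input, label) data layout are hypothetical placeholders for the platform's existing training pipeline and are not taken from the patent.

```python
def adversarial_training(train_model, model, original_set, valid_adversarial_samples):
    """Retrain the victim model on the original data augmented with adversarial samples.

    train_model(model, dataset) is assumed to run the platform's normal training
    loop; each dataset entry is an (input, label) pair, and each adversarial sample
    keeps the label of the clean sample it was derived from.
    """
    augmented_set = list(original_set) + list(valid_adversarial_samples)
    return train_model(model, augmented_set)   # model hardened against the attack
```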
As shown in fig. 4, the adversarial-sample attack in the object detection scenario is taken as an example to further explain how the adversarial-sample attack technique improves model robustness. The specific process is as follows:
a. a picture to be attacked is selected and uploaded to the platform, and the platform takes the selected picture as the target to be attacked, i.e. the original sample;
b. for the picture to be attacked, a black-box attack algorithm for the object detection scenario is selected and the attack region is specified, and a corresponding adversarial sample is generated;
c. the platform automatically inputs the generated adversarial sample into the SSD model and outputs the corresponding adversarial-sample detection result;
d. the platform attacks the SSD model with perturbations that are varied over multiple rounds of iteration on the detection result, and compares the detection result with the preset attack result until the attack effect is good enough, yielding a new adversarial sample that is output as an effective adversarial sample;
e. the platform collects and stores the effective adversarial sample and adds it to the original training set to form a new data set containing adversarial samples, and then trains the SSD model with this new data set, so that the SSD model gains a certain ability to resist adversarial samples, which enhances the robustness and security of the SSD model.
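One possible realisation of the "good enough" comparison in step d, for a detection model such as SSD, is to require that the attacked class no longer be detected above a confidence threshold; the detection tuple format and the threshold value below are assumptions for illustration, not taken from the patent.

```python
def detection_attack_succeeded(detections, attacked_class, conf_threshold=0.3):
    """Return True if the attacked class is no longer confidently detected.

    detections is assumed to be a list of (class_id, confidence, box) tuples
    produced by the SSD model for one adversarial sample.
    """
    return all(cls != attacked_class or conf < conf_threshold
               for cls, conf, _box in detections)
```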
As shown in fig. 5, an embodiment of the present invention provides an adversarial sample generation system for an intelligent driving scenario. The system is the logical functional implementation of the adversarial-sample attack experimental platform described above and specifically comprises:
a to-be-attacked picture receiving unit 110, configured to receive the picture to be attacked;
a black-box attack algorithm selection unit 120, configured to determine the current intelligent driving scenario to which the picture to be attacked is applicable and to determine a corresponding black-box attack algorithm according to the current intelligent driving scenario; the current intelligent driving scenario is one of an image classification scenario, an object detection scenario and a speech recognition scenario; the black-box attack algorithm is one of a saliency-detection attack (SIMBA) algorithm, a boundary attack (BA) algorithm, a hop-skip-jump attack (HSJA) algorithm and an affine projection attack (APA) algorithm;
and an adversarial sample generation unit 130, configured to attack the picture to be attacked according to the black-box attack algorithm to generate an adversarial sample.
If the current intelligent driving scenario is an image classification scenario, the determined black-box attack algorithm is the SIMBA algorithm;
if the current intelligent driving scenario is an object detection scenario, the determined black-box attack algorithm is the BA algorithm or the HSJA algorithm; and
if the current intelligent driving scenario is a speech recognition scenario, the determined black-box attack algorithm is the APA algorithm.
The picture to be attacked is uploaded manually by a user, or is read from a specified directory.
The embodiment of the invention has the following beneficial effects:
1. The method determines a corresponding black-box attack algorithm based on the current intelligent driving scenario to which the picture to be attacked is applicable, and attacks the picture to be attacked with the selected black-box attack algorithm to generate an adversarial sample, so that the sample is closer to a real-world attack;
2. The method attacks the neural network model of the current intelligent driving scenario with the adversarial sample generated by the black-box attack algorithm in order to verify the adversarial sample against that model, and trains the neural network model with the verified, effective adversarial samples, thereby improving the security and robustness of the neural network model.
It should be noted that, in the above system embodiment, the system units are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disk.
The above disclosure is only a preferred embodiment of the present invention and certainly cannot be taken to limit the scope of the claims of the invention; equivalent variations made according to the claims of the present invention still fall within the scope of the invention.

Claims (10)

1. An adversarial sample generation method for an intelligent driving scenario, characterized by comprising the following steps:
receiving a picture to be attacked;
determining the current intelligent driving scenario to which the picture to be attacked is applicable, and determining a corresponding black-box attack algorithm according to the current intelligent driving scenario; the current intelligent driving scenario is one of an image classification scenario, an object detection scenario and a speech recognition scenario; the black-box attack algorithm is one of a saliency-detection attack (SIMBA) algorithm, a boundary attack (BA) algorithm, a hop-skip-jump attack (HSJA) algorithm and an affine projection attack (APA) algorithm;
and attacking the picture to be attacked according to the black-box attack algorithm to generate an adversarial sample.
2. The adversarial sample generation method for an intelligent driving scenario according to claim 1, wherein if the current intelligent driving scenario is an image classification scenario, the determined black-box attack algorithm is the SIMBA algorithm.
3. The adversarial sample generation method for an intelligent driving scenario according to claim 1, wherein if the current intelligent driving scenario is an object detection scenario, the determined black-box attack algorithm is the BA algorithm or the HSJA algorithm.
4. The adversarial sample generation method for an intelligent driving scenario according to claim 1, wherein if the current intelligent driving scenario is a speech recognition scenario, the determined black-box attack algorithm is the APA algorithm.
5. The adversarial sample generation method for an intelligent driving scenario according to claim 1, wherein the method further comprises:
S41, acquiring a corresponding neural network model according to the current intelligent driving scenario; the acquired neural network model is one of an SSD model for the object detection scenario, a DeepSpeech model for the speech recognition scenario and a MobileNet model for the image classification scenario;
S42, inputting the adversarial sample into the acquired neural network model to obtain a corresponding adversarial-sample detection result, and perturbing the detection result to form an attack sample for attacking the acquired neural network model;
S43, inputting the attack sample into the acquired neural network model to generate an adversarial sample with updated perturbation information;
S44, if the result of comparing the adversarial sample with updated perturbation information against a preset attack result does not meet a preset condition, perturbing the adversarial sample with updated perturbation information again to form a new attack sample, and returning to step S43;
and S45, if the result of comparing the adversarial sample with updated perturbation information against the attack result meets the preset condition, outputting the adversarial sample with updated perturbation information as a new adversarial sample.
6. The adversarial sample generation method for an intelligent driving scenario according to claim 5, wherein the method further comprises:
training the acquired neural network model using the new adversarial samples.
7. The adversarial sample generation method for an intelligent driving scenario according to claim 1, wherein the picture to be attacked is uploaded manually by a user, or is read from a specified directory.
8. An adversarial sample generation system for an intelligent driving scenario, characterized by comprising:
a to-be-attacked picture receiving unit, configured to receive the picture to be attacked;
a black-box attack algorithm selection unit, configured to determine the current intelligent driving scenario to which the picture to be attacked is applicable and to determine a corresponding black-box attack algorithm according to the current intelligent driving scenario; the current intelligent driving scenario is one of an image classification scenario, an object detection scenario and a speech recognition scenario; the black-box attack algorithm is one of a saliency-detection attack (SIMBA) algorithm, a boundary attack (BA) algorithm, a hop-skip-jump attack (HSJA) algorithm and an affine projection attack (APA) algorithm;
and an adversarial sample generation unit, configured to attack the picture to be attacked according to the black-box attack algorithm to generate an adversarial sample.
9. The adversarial sample generation system for an intelligent driving scenario according to claim 8, wherein if the current intelligent driving scenario is an image classification scenario, the determined black-box attack algorithm is the SIMBA algorithm;
if the current intelligent driving scenario is an object detection scenario, the determined black-box attack algorithm is the BA algorithm or the HSJA algorithm; and
if the current intelligent driving scenario is a speech recognition scenario, the determined black-box attack algorithm is the APA algorithm.
10. The adversarial sample generation system for an intelligent driving scenario according to claim 9, wherein the picture to be attacked is uploaded manually by a user, or is read from a specified directory.
CN202210797243.7A 2022-07-08 2022-07-08 Confrontation sample generation method and system for intelligent driving scene Pending CN115223011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210797243.7A CN115223011A (en) 2022-07-08 2022-07-08 Confrontation sample generation method and system for intelligent driving scene


Publications (1)

Publication Number Publication Date
CN115223011A true CN115223011A (en) 2022-10-21

Family

ID=83609022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210797243.7A Pending CN115223011A (en) 2022-07-08 2022-07-08 Confrontation sample generation method and system for intelligent driving scene

Country Status (1)

Country Link
CN (1) CN115223011A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7440891B1 (en) * 1997-03-06 2008-10-21 Asahi Kasei Kabushiki Kaisha Speech processing method and apparatus for improving speech quality and speech recognition performance
CN109902018A (en) * 2019-03-08 2019-06-18 同济大学 A kind of acquisition methods of intelligent driving system test cases
CN111177757A (en) * 2019-12-27 2020-05-19 支付宝(杭州)信息技术有限公司 Processing method and device for protecting privacy information in picture
CN111401407A (en) * 2020-02-25 2020-07-10 浙江工业大学 Countermeasure sample defense method based on feature remapping and application
CN112349281A (en) * 2020-10-28 2021-02-09 浙江工业大学 Defense method of voice recognition model based on StarGAN
CN113515774A (en) * 2021-04-23 2021-10-19 北京航空航天大学 Privacy protection method for generating countermeasure sample based on projection gradient descent method
CN113571067A (en) * 2021-06-21 2021-10-29 浙江工业大学 Voiceprint recognition countermeasure sample generation method based on boundary attack
CN113704758A (en) * 2021-07-29 2021-11-26 西安交通大学 Black box attack counterattack sample generation method and system
CN114664313A (en) * 2022-03-01 2022-06-24 游密科技(深圳)有限公司 Speech recognition method, apparatus, computer device, storage medium and program product

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUICHEN LI 等: "QEBA: Query-Efficient Boundary-Based Blackbox Attack", 《2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》, pages 1218 - 1227 *
K. NAVEEN KUMAR 等: "Black-box Adversarial Attacks in Autonomous Vehicle Technology", 《2020 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP》, pages 1 - 7 *

Similar Documents

Publication Publication Date Title
CN110851835A (en) Image model detection method and device, electronic equipment and storage medium
CN111598182B (en) Method, device, equipment and medium for training neural network and image recognition
WO2020259128A1 (en) Liveness detection method and apparatus, electronic device, and computer readable storage medium
CN111476268A (en) Method, device, equipment and medium for training reproduction recognition model and image recognition
CN110348475A (en) It is a kind of based on spatial alternation to resisting sample Enhancement Method and model
CN111626367A (en) Countermeasure sample detection method, apparatus, device and computer readable storage medium
CN110049309B (en) Method and device for detecting stability of image frame in video stream
CN111476269B (en) Balanced sample set construction and image reproduction identification method, device, equipment and medium
CN114220097A (en) Anti-attack-based image semantic information sensitive pixel domain screening method and application method and system
CN115168210A (en) Robust watermark forgetting verification method based on confrontation samples in black box scene in federated learning
CN111241873A (en) Image reproduction detection method, training method of model thereof, payment method and payment device
CN114140670B (en) Method and device for verifying ownership of model based on exogenous characteristics
CN114758113A (en) Confrontation sample defense training method, classification prediction method and device, and electronic equipment
CN106355066A (en) Face authentication method and face authentication device
CN116189063B (en) Key frame optimization method and device for intelligent video monitoring
CN110351094B (en) Character verification method, device, computer equipment and storage medium
CN112084936A (en) Face image preprocessing method, device, equipment and storage medium
CN115223011A (en) Confrontation sample generation method and system for intelligent driving scene
CN115828848A (en) Font generation model training method, device, equipment and storage medium
CN114241253A (en) Model training method, system, server and storage medium for illegal content identification
CN112884069A (en) Method for detecting confrontation network sample
CN113742775A (en) Image data security detection method, system and storage medium
CN110751197A (en) Picture classification method, picture model training method and equipment
CN111080586A (en) Method for obtaining evidence of tampered image source based on convolutional neural network
CN116488943B (en) Multimedia data leakage tracing detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination