CN117235820A - Chip self-destruction circuit and destruction method based on same - Google Patents

Chip self-destruction circuit and destruction method based on same

Info

Publication number
CN117235820A
Authority
CN
China
Prior art keywords
chip
texture
destruction
area
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311249335.2A
Other languages
Chinese (zh)
Inventor
田芳
张寒英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Weichu Information Technology Co ltd
Original Assignee
Hunan Weichu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Weichu Information Technology Co ltd
Priority to CN202311249335.2A
Publication of CN117235820A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

A chip self-destruction circuit and a destruction method based on it are disclosed. First, an image of the chip destruction area is acquired by a camera after the chip self-destruction circuit test is started. Texture features are then extracted from the image to obtain a chip destruction area texture shallow full-perception feature vector and a chip destruction area texture deep full-perception feature vector. These two vectors are fused into a multi-scale chip destruction area texture feature vector, and whether the chip has been completely destroyed is finally determined based on that vector. In this way, the chip destruction area can be analyzed automatically to determine whether the chip is completely destroyed.

Description

Chip self-destruction circuit and destruction method based on same
Technical Field
The present disclosure relates to the field of chips, and more particularly, to a chip self-destruction circuit and a destruction method based on the chip self-destruction circuit.
Background
With the application and popularization of electronic products, chip technology has developed rapidly. Chips store a large amount of important and sensitive technical information, for example the magnetic card chips used by banks and the memory chips of storage devices. Such chips therefore have extremely high information-security requirements and usually require authorization and security verification before use. When a chip is illegally attacked by a malicious party, or when its information must be abandoned, a destruction device must be started to destroy the chip so that the information remains secure.
The chip self-destruction circuit is a circuit capable of automatically or remotely triggering physical destruction of the chip under specific conditions so as to prevent sensitive information in the chip from being illegally acquired or utilized. In the actual test process of the chip destruction circuit, in order to ensure that the chip can be completely destroyed, the effect of the chip self-destruction circuit needs to be effectively detected and evaluated. However, current detection methods rely mainly on manual observation and analysis, lack objectivity and accuracy, and are inefficient. Thus, an optimized solution is desired.
Disclosure of Invention
In view of this, the disclosure provides a chip self-destruction circuit and a destruction method based on the chip self-destruction circuit, which can automatically analyze a chip destruction area to determine whether a chip is completely destroyed.
According to an aspect of the present disclosure, there is provided a destruction method based on a chip self-destruction circuit, including:
acquiring a chip destruction area image acquired by a camera after the chip self-destruction circuit test is started;
extracting texture features of the image of the chip destroying area to obtain a shallow texture full-perception feature vector of the chip destroying area and a deep texture full-perception feature vector of the chip destroying area;
fusing the shallow full-perception feature vector of the texture of the chip destruction area and the deep full-perception feature vector of the texture of the chip destruction area to obtain a texture feature vector of the multi-scale chip destruction area; and determining whether to completely destroy the chip based on the texture feature vector of the multi-scale chip destruction area.
According to another aspect of the present disclosure, there is provided a chip self-destruction circuit that operates in a destruction method based on the chip self-destruction circuit as described above.
According to the embodiment of the disclosure, firstly, a chip destruction area image acquired by a camera after the chip self-destruction circuit test is started is acquired; then, texture feature extraction is performed on the chip destruction area image to obtain a chip destruction area texture shallow full-perception feature vector and a chip destruction area texture deep full-perception feature vector; then, the two feature vectors are fused to obtain a multi-scale chip destruction area texture feature vector; and finally, whether the chip is completely destroyed is determined based on the multi-scale chip destruction area texture feature vector. In this way, the chip destruction area can be automatically analyzed to determine whether the chip is completely destroyed.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a method of destroying a chip-based self-destruction circuit according to an embodiment of the present disclosure.
Fig. 2 shows an architecture schematic of a destruction method based on a chip self-destruction circuit according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of substep S120 of a destruction method based on a chip self-destruction circuit according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of substep S140 of a destruction method based on a chip self-destruction circuit according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a destruction system based on a chip self-destruction circuit in accordance with an embodiment of the present disclosure.
Fig. 6 shows an application scenario diagram of a destruction method based on a chip self-destruction circuit according to an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the disclosure. All other embodiments, which can be made by one of ordinary skill in the art without undue burden based on the embodiments of the present disclosure, are also within the scope of the present disclosure.
As used in this disclosure and in the claims, the terms "a," "an," and/or "the" do not denote the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the expressly listed steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Aiming at the technical problems, the technical conception of the present disclosure is that the chip destruction area can be automatically analyzed by using an image processing technology and an artificial intelligence technology based on deep learning so as to judge whether the chip is completely destroyed.
Based on this, fig. 1 shows a flowchart of a destruction method based on a chip self-destruction circuit according to an embodiment of the present disclosure. Fig. 2 shows an architecture schematic of a destruction method based on a chip self-destruction circuit according to an embodiment of the present disclosure. As shown in fig. 1 and 2, a destroying method based on a chip self-destruction circuit according to an embodiment of the disclosure includes the steps of: s110, acquiring a chip destruction area image acquired by a camera after the chip self-destruction circuit test is started; s120, extracting texture features of the image of the chip destroying area to obtain a shallow texture full-perception feature vector of the chip destroying area and a deep texture full-perception feature vector of the chip destroying area; s130, fusing the shallow full-perception feature vector of the texture of the chip destruction area and the deep full-perception feature vector of the texture of the chip destruction area to obtain a texture feature vector of the multi-scale chip destruction area; and S140, determining whether to completely destroy the chip based on the texture feature vector of the multi-scale chip destruction area. It should be understood that the purpose of step S110 is to obtain, by using the camera, an image of the chip destruction area, and after the chip self-destruction circuit test is started, the camera may capture the situation of chip destruction. The purpose of the step S120 is to extract texture features from the image of the chip destruction area, and by extracting the texture features, shallow full-sensing feature vectors and deep full-sensing feature vectors of the chip destruction area can be obtained, and these feature vectors can be used for subsequent analysis and processing. The purpose of the step S130 is to fuse the texture shallow full-perception feature vector and the texture deep full-perception feature vector of the chip destruction area to obtain a multi-scale texture feature vector of the chip destruction area, where the multi-scale feature vector can provide more comprehensive and accurate information for subsequent destruction judgment. The purpose of step S140 is to determine whether the chip is completely destroyed based on the texture feature vector of the multi-scale chip destruction area, and by analyzing and comparing the feature vector, the degree and integrity of chip destruction can be determined, and according to the determination result, corresponding measures can be taken, such as further destruction or confirmation of the completion of destruction. In other words, the destroying method based on the chip self-destroying circuit judges the destroying condition of the chip by utilizing the steps of image acquisition, texture feature extraction, feature fusion and the like and utilizing the computer vision and image processing technology, and the method can help ensure the complete destroying of the chip, thereby protecting the safety of sensitive information.
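For illustration of step S110, the following is a minimal sketch of how the destruction-area image might be captured with OpenCV; the camera index and output file name are hypothetical choices for the example and are not specified by the disclosure.

```python
import cv2

def acquire_destruction_area_image(camera_index: int = 0):
    """Grab one frame of the chip destruction area after the
    self-destruction circuit test has been started.

    The camera index and output path below are illustrative assumptions;
    the disclosure only requires that a camera captures the destruction
    area once the test is triggered.
    """
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    ok, frame = cap.read()          # BGR image of the destruction area
    cap.release()
    if not ok:
        raise RuntimeError("failed to capture destruction-area image")
    return frame

if __name__ == "__main__":
    image = acquire_destruction_area_image()
    cv2.imwrite("chip_destruction_area.png", image)  # hypothetical output path
```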
More specifically, in the technical scheme of the present disclosure, first, a chip destruction area image after a chip self-destruction circuit test is started, which is acquired by a camera, is acquired. And then, extracting texture features of the image of the chip destroying area to obtain a shallow texture full-perception feature vector of the chip destroying area and a deep texture full-perception feature vector of the chip destroying area. It should be appreciated that the texture features in the image of the chip destruction area may provide information about the shape and structure, reflecting the physical state of the chip, such as cracks, breaks.
In a specific example of the present disclosure, the encoding process for extracting texture features of the image of the chip destruction area to obtain a shallow full-perception feature vector of the texture of the chip destruction area and a deep full-perception feature vector of the texture of the chip destruction area includes: firstly calculating an HOG image of the chip destruction area image to obtain the chip destruction area HOG image; then, the HOG image of the chip destruction area passes through an image texture feature extractor based on a pyramid network to obtain a texture shallow feature map of the chip destruction area and a texture deep feature map of the chip destruction area; and then the texture shallow feature map of the chip destruction area and the texture deep feature map of the chip destruction area are respectively processed through a feature full-perception module based on a full-connection layer to obtain texture shallow full-perception feature vectors of the chip destruction area and texture deep full-perception feature vectors of the chip destruction area.
Accordingly, as shown in fig. 3, the extracting of texture features from the image of the chip destruction area to obtain a shallow full-perception feature vector of texture in the chip destruction area and a deep full-perception feature vector of texture in the chip destruction area includes: S121, calculating an HOG image of the chip destruction area image to obtain a chip destruction area HOG image; S122, passing the chip destruction area HOG image through an image texture feature extractor based on a pyramid network to obtain a chip destruction area texture shallow feature map and a chip destruction area texture deep feature map; and S123, passing the chip destruction area texture shallow feature map and the chip destruction area texture deep feature map respectively through a feature full-perception module based on a fully connected layer to obtain the chip destruction area texture shallow full-perception feature vector and the chip destruction area texture deep full-perception feature vector. It should be understood that the purpose of step S121 is to calculate a Histogram of Oriented Gradients (HOG) image of the chip destruction area image; the HOG image extracts edge and texture information in the image for subsequent texture feature extraction. The purpose of step S122 is to process the chip destruction area HOG image with an image texture feature extractor based on a pyramid network to obtain a texture shallow feature map and a texture deep feature map; these feature maps capture texture information of the chip destruction area for subsequent feature fusion and full-perception feature extraction. The purpose of step S123 is to process the texture shallow feature map and the texture deep feature map, each through a feature full-perception module based on a fully connected layer, to obtain a texture shallow full-perception feature vector and a texture deep full-perception feature vector of the chip destruction area; the fully connected layer synthesizes and abstracts the features into feature vectors with stronger expressive power. In other words, these steps extract the texture features of the chip destruction area image: by computing the HOG image and using a pyramid-network-based image texture feature extractor and a fully-connected-layer-based feature full-perception module, a texture shallow full-perception feature vector and a texture deep full-perception feature vector are obtained, and these feature vectors are then used for feature fusion and destruction determination to decide whether the chip is completely destroyed.
It should be noted that HOG (Histogram of Oriented Gradients) image is a method for image feature extraction by calculating gradient information in different directions in the image and converting it into a histogram representation to capture the edges and texture features of the image. The HOG image is calculated as follows: 1. converting the image to a gray scale image for better processing of edge and texture information; 2. the magnitude and direction of the gradient is calculated for each pixel in the image, and a common method is to use a Sobel operator or other gradient calculation algorithm; 3. dividing the image into small local areas (cells), typically square or rectangular areas; 4. for each local region, counting the gradient direction histograms of pixels in the region, wherein each column of the histogram represents the number of gradient directions in a certain range; 5. combining several adjacent local areas (cells) into larger blocks (blocks), and normalizing the histograms in each block to enhance the robustness to illumination changes; 6. the histograms of all blocks are concatenated to form the final HOG feature vector. Features extracted from the HOG image mainly comprise edge and texture information of the image, and the features play an important role in computer vision tasks such as target detection, pedestrian recognition and the like, and local features in the image can be converted into global feature vectors for subsequent classification, recognition or other tasks through calculating the HOG image.
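To make the HOG computation above concrete, here is a minimal sketch using scikit-image's hog routine; the orientation count, cell size and block size are assumed values, not parameters fixed by the disclosure.

```python
from skimage import color, feature, io

def compute_hog_image(image_path: str):
    """Compute the HOG representation of a chip destruction area image.

    Returns both the HOG feature vector and the HOG visualization image
    that a downstream texture feature extractor would consume.
    Parameter values (9 orientations, 8x8 cells, 2x2 blocks) are
    illustrative assumptions.
    """
    img = io.imread(image_path)
    gray = color.rgb2gray(img[..., :3]) if img.ndim == 3 else img  # step 1: grayscale
    hog_vec, hog_img = feature.hog(
        gray,
        orientations=9,             # step 4: 9-bin direction histograms per cell
        pixels_per_cell=(8, 8),     # step 3: small local cells
        cells_per_block=(2, 2),     # step 5: block grouping and normalization
        block_norm="L2-Hys",
        visualize=True,             # also return the HOG image
        feature_vector=True,        # step 6: concatenate histograms into one vector
    )
    return hog_vec, hog_img
```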
It is worth mentioning that a Pyramid Network (Pyramid Network) is a Network structure for image processing and computer vision tasks, which is mainly used for processing features and information of different scales. Pyramids in a pyramid network refer to multi-scale feature representations, typically, the pyramid network is made up of multiple branches, each of which processes input data of a different scale, each of which contains a series of convolution layers, pooling layers, or other operations for extracting features of a different scale. The main idea of the pyramid network is to capture multi-scale information in an image by processing features of different scales. In the image processing task, features of different scales are very important for tasks such as detection, identification and positioning. For example, in a target detection task, objects may appear at different scales, so using a pyramid network, multi-scale features may be extracted, thereby enhancing the perceptibility of the model to objects of different scales. The pyramid network may take different structures and designs, such as a network structure with multiple parallel branches, or a structure with multiple scale feature map cascades. The design can effectively extract multi-scale features and perform feature fusion so as to enhance the expression capability and performance of the model. In summary, the pyramid network is a network structure for processing multi-scale features, and by processing input data of different scales, multi-scale information in an image can be captured, so that accuracy and robustness of a computer vision task are improved.
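As an illustration of the pyramid-style extractor described above, the following PyTorch sketch produces a shallow feature map and a deep feature map at two scales from a HOG image; the channel counts, kernel sizes and input resolution are assumptions rather than the extractor actually used by the disclosure.

```python
import torch
import torch.nn as nn

class PyramidTextureExtractor(nn.Module):
    """Toy pyramid-style extractor returning features at two scales.

    The shallow branch keeps fine, high-resolution texture cues, while
    the deeper branch captures coarser structure at half resolution.
    All channel counts and kernel sizes are illustrative assumptions.
    """
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.stage1 = nn.Sequential(                     # shallow stage
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.stage2 = nn.Sequential(                     # deeper stage, half resolution
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )

    def forward(self, hog_image: torch.Tensor):
        shallow = self.stage1(hog_image)   # chip destruction area texture shallow feature map
        deep = self.stage2(shallow)        # chip destruction area texture deep feature map
        return shallow, deep

# usage sketch with an assumed 1-channel 128x128 HOG image:
# extractor = PyramidTextureExtractor()
# shallow_map, deep_map = extractor(torch.randn(1, 1, 128, 128))
```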
It is worth mentioning that the fully connected layer (Fully Connected Layer), also called dense connected layer or affine layer, is a common type of layer in neural networks, where each neuron is connected to all neurons of the previous layer. In the fully connected layer, each input neuron is connected with each output neuron to form a fully connected network structure. Each connection has a weight for adjusting the importance of the input signal. In addition, each output neuron has a bias term (bias) for adjusting the activation threshold of the neuron. The calculation process of the full connection layer is as follows: 1. flattening the input data into a one-dimensional vector, e.g., if the input is a two-dimensional image, the fully connected layer will expand it into a vector; 2. calculating the input signal of each output neuron, namely performing dot product on the input vector and the weight, and adding a bias term; 3. and carrying out nonlinear transformation on the input signal through an activation function to obtain an output value. The fully connected layer is typically used for the last few layers of the neural network to map the high-level feature representation to the final output class or value. In convolutional neural networks (Convolutional Neural Network, CNN), one or more pooling layers and batch normalization layers are typically added between the convolutional layer and the fully-connected layer to extract and compress features and reduce the number of parameters. The main advantage of the fully connected layer is the ability to learn complex relationships and non-linear patterns in the input data. However, the number of parameters of the fully connected layer is large, over fitting is easily caused, and the calculation amount is large. Thus, in some tasks, other layer types (such as convolutional layers or loop layers) may be employed to reduce the number of parameters and computational complexity.
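A minimal sketch of the fully-connected "full-perception" step might look as follows: flatten a texture feature map and project it to a fixed-length vector. The output dimension and the ReLU activation are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FullPerceptionModule(nn.Module):
    """Flatten a texture feature map and map it to a full-perception
    feature vector with a fully connected layer (illustrative sizes)."""
    def __init__(self, in_features: int, out_features: int = 256):
        super().__init__()
        self.flatten = nn.Flatten()                       # step 1: flatten to a 1-D vector
        self.fc = nn.Linear(in_features, out_features)    # step 2: weights and bias
        self.act = nn.ReLU()                              # step 3: nonlinear activation

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        return self.act(self.fc(self.flatten(feature_map)))

# usage sketch with the (assumed) map sizes from the pyramid extractor above:
# shallow_fc = FullPerceptionModule(32 * 128 * 128)   # shallow full-perception vector
# deep_fc = FullPerceptionModule(64 * 64 * 64)        # deep full-perception vector
```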
Then, the shallow full-perception feature vector of the chip destruction area texture and the deep full-perception feature vector of the chip destruction area texture are fused using principal component analysis to obtain the multi-scale chip destruction area texture feature vector. Fusion by principal component analysis reduces the dimensionality of the feature distribution while avoiding the loss of important information, retaining the features of the distribution that contribute most to the target result.
Accordingly, in step S130, fusing the shallow full-perception feature vector of the texture of the chip destruction area and the deep full-perception feature vector of the texture of the chip destruction area to obtain a texture feature vector of the multi-scale chip destruction area, including: and fusing the shallow full-perception feature vector of the texture of the chip destroying area and the deep full-perception feature vector of the texture of the chip destroying area by using principal component analysis to obtain the texture feature vector of the multi-scale chip destroying area. It is worth mentioning that principal component analysis (Principal Component Analysis, PCA) is a common technique of data reduction and feature extraction that can transform raw data into a new set of features, called principal components, with the greatest variance. The objective of principal component analysis is to project the original data into a new coordinate system by linear transformation so that the projected data has the greatest variance, which has the advantage of reducing the dimension of the data while retaining as much information as possible, and by selecting the number of principal components retained, the dimension reduction of the data can be achieved, thereby reducing the storage space and computation cost, and removing noise and redundant information in the data. In the process of fusing the shallow full-perception feature vector of the texture of the chip destroying area and the deep full-perception feature vector of the texture of the chip destroying area, the two feature vectors can be fused by using principal component analysis, so as to obtain the texture feature vector of the multi-scale chip destroying area, and the principal component analysis can be realized by calculating covariance matrix and eigenvalue decomposition. Through principal component analysis and fusion, feature vectors with different scales can be integrated, so that more comprehensive and rich feature representations are obtained, understanding and distinguishing capability of a model on textures of a chip destruction area are improved, and accuracy and robustness of chip destruction judgment are improved. In other words, principal component analysis is a technology for data dimension reduction and feature extraction, and is realized by calculating principal components of data, and when texture features of a chip destruction area are fused, feature vectors of different scales can be fused into multi-scale feature representation by using the principal component analysis, so that the performance and effect of a model are improved.
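As a sketch of the PCA-based fusion, the shallow and deep full-perception vectors can be concatenated and projected onto principal components estimated from a batch of training samples; the sample count and the number of retained components (128) are assumptions, and fitting PCA on a training batch is an assumption about how the fusion would be set up in practice.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_with_pca(shallow_vecs: np.ndarray,
                  deep_vecs: np.ndarray,
                  n_components: int = 128) -> np.ndarray:
    """Fuse shallow and deep full-perception vectors into multi-scale
    texture feature vectors via principal component analysis.

    shallow_vecs, deep_vecs: arrays of shape (n_samples, d1) and
    (n_samples, d2); PCA needs several samples to estimate the
    covariance structure before projecting onto the directions of
    largest variance.
    """
    combined = np.concatenate([shallow_vecs, deep_vecs], axis=1)
    pca = PCA(n_components=n_components)   # keep the highest-variance directions
    return pca.fit_transform(combined)     # multi-scale texture feature vectors

# usage sketch with random stand-in data:
# shallow = np.random.randn(200, 256)
# deep = np.random.randn(200, 256)
# multi_scale = fuse_with_pca(shallow, deep)   # shape (200, 128)
```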
Further, the texture feature vector of the multi-scale chip destruction area is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the multi-scale chip destruction area is completely destroyed or not. If the classification result indicates that the chip is not completely destroyed, in an embodiment of the present disclosure, the following measures may be taken: (1) The triggering intensity or duration of the self-destruction circuit of the chip is increased to enlarge the physical damage of the chip; (2) The coverage range or the distribution density of the self-destruction circuit of the chip is increased to enlarge the destruction area of the chip; (3) In combination with other means such as encryption, erasure or overwriting, etc., to reduce the readability of the information in the chip.
Accordingly, as shown in fig. 4, determining whether to completely destroy the chip based on the texture feature vector of the multi-scale chip destruction area includes: s141, performing feature distribution gain on the texture feature vector of the multi-scale chip destruction area to obtain an optimized texture feature vector of the multi-scale chip destruction area; and S142, enabling the texture feature vector of the optimized multi-scale chip destruction area to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the multi-scale chip destruction area is completely destroyed or not. It should be understood that in the process of determining whether to completely destroy based on the texture feature vector of the multi-scale chip destruction area, two steps are included: s141 and S142. In step S141, feature distribution gains are performed on texture feature vectors in the multi-scale chip destruction area, so that the feature vectors can be further optimized. The feature distribution gain is a method for adjusting feature vector distribution, which can improve the difference between features and enhance the distinguishing capability of the features, and through the step, the texture feature vector of the optimized multi-scale chip destroying area can be more suitable for the subsequent classification task. In the step S142, the texture feature vector of the optimized multi-scale chip destruction area is input into a classifier, and classified by the classifier to obtain a classification result. The classifier may be various machine learning algorithms or deep learning models, such as Support Vector Machines (SVMs), decision trees, random forests, convolutional Neural Networks (CNNs), and the like. The classification result indicates a judgment as to whether the chip is completely destroyed. Through the combination of the two steps, the texture feature vector of the optimized multi-scale chip destroying area can be used for classifying to judge whether the chip is completely destroyed, and the method can be used for judging based on the texture feature of the chip destroying area, so that the safety of sensitive information is protected.
In the technical scheme of the disclosure, the chip destruction area texture shallow full-perception feature vector and the chip destruction area texture deep full-perception feature vector express, respectively, the shallow and deep image semantic features of the chip destruction area HOG image under the different-scale feature representations of the pyramid network. When the two vectors are fused by principal component analysis, the dimensionality is reduced, but the feature distributions of the image semantic features at different depths are not aligned, so the dimension-reducing fusion introduces background distribution noise that interferes with the semantic feature distribution of the source image. At the same time, the multi-scale chip destruction area texture feature vector itself carries a cross-scale, cross-depth hierarchical image semantic spatial feature expression, so it is desirable to enhance the expression effect of the multi-scale chip destruction area texture feature vector on the basis of its own distribution characteristics. Therefore, the applicant of the present disclosure applies a distribution gain based on a probability density feature simulation paradigm to the multi-scale chip destruction area texture feature vector.
Accordingly, in a specific example, performing feature distribution gain on the texture feature vector of the multi-scale chip destruction area to obtain an optimized texture feature vector of the multi-scale chip destruction area, including: performing feature distribution gain on the texture feature vector of the multi-scale chip destruction area by using the following optimization formula to obtain the texture feature vector of the optimized multi-scale chip destruction area; wherein, the optimization formula is:
wherein V is the multi-scale chip destruction area texture feature vector, L is the length of the multi-scale chip destruction area texture feature vector, v_i is the feature value at the i-th position of the multi-scale chip destruction area texture feature vector, ||V||_2^2 denotes the square of the two-norm of the multi-scale chip destruction area texture feature vector, α is a weighted hyper-parameter, exp(·) denotes the exponential operation on a value, i.e., the natural exponential function raised to that value, and v'_i is the feature value at the i-th position of the optimized multi-scale chip destruction area texture feature vector.
Here, based on the way the standard Cauchy distribution simulates the probability density of the natural Gaussian distribution, the distribution gain based on the probability density feature simulation paradigm can use the feature scale as a simulation mask to distinguish foreground object features from background distribution noise in the high-dimensional feature space. In this way, the image-semantic-space classification of the high-dimensional features is used to perform a soft matching of the feature-space mapping to the associated semantic cognition distribution in the high-dimensional space, yielding an unconstrained distribution gain of the high-dimensional feature distribution. This improves the expression effect of the multi-scale chip destruction area texture feature vector V based on its feature distribution characteristics, and thereby also improves the accuracy of the classification result obtained when the multi-scale chip destruction area texture feature vector is passed through the classifier.
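The optimization formula itself is not reproduced in the text above. Purely as a hypothetical illustration of how a gain built from the quantities defined there (L, v_i, ||V||_2^2, α, exp) might be computed, consider the following sketch; it is explicitly not the formula of the disclosure.

```python
import numpy as np

def feature_distribution_gain(v: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Hypothetical distribution gain on the multi-scale texture feature
    vector V. This is NOT the formula of the original filing; it only
    illustrates one way to combine the quantities the text defines:
    the vector length L, the per-position values v_i, the squared
    two-norm ||V||_2^2, a weighting hyper-parameter alpha, and exp().
    """
    length = v.shape[0]                        # L
    sq_norm = float(np.dot(v, v))              # ||V||_2^2
    gain = np.exp(-alpha * sq_norm / length)   # illustrative global gain factor
    return v * gain                            # optimized vector v'

# usage sketch:
# v = np.random.randn(128)
# v_opt = feature_distribution_gain(v)
```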
More specifically, in step S142, the texture feature vector of the optimized multi-scale chip destruction area is passed through a classifier to obtain a classification result, where the classification result is used to indicate whether the destruction is complete, and the method includes: performing full-connection coding on texture feature vectors of the optimized multi-scale chip destruction area by using a full-connection layer of the classifier to obtain coded classification feature vectors; and inputting the coding classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
That is, in the technical solution of the present disclosure, the labels of the classifier are complete destruction (first label) and incomplete destruction (second label), and the classifier determines, through a softmax function, which classification label the optimized multi-scale chip destruction area texture feature vector belongs to. It should be noted that the first label p1 and the second label p2 do not carry a manually defined concept; during training, the computer model has no notion of "whether the chip is completely destroyed". There are simply two classification labels, and the model outputs the probability of the feature under each of them, with the sum of p1 and p2 equal to one. The classification result of whether the chip is completely destroyed is therefore obtained by converting the classification labels into a two-class probability distribution that follows this natural law; what is used is the physical meaning of the labels' probability distribution, rather than the literal linguistic meaning of "completely destroyed".
It should be appreciated that the role of the classifier is to learn classification rules from training data with known classes and then classify (or predict) unknown data. Logistic regression, SVMs and the like are commonly used to solve binary classification problems. For multi-class classification, logistic regression or SVMs can also be used, but multiple binary classifiers must then be combined, which is error-prone and inefficient; the commonly used multi-class method is the Softmax classification function.
It is worth mentioning that fully connected encoding (Fully Connected Encoding) is a method of encoding input data through a fully connected layer. In deep learning, a fully connected layer means that each neuron is connected to all neurons of the previous layer, and each connection has a weight used to compute the output. The function of fully connected encoding is to convert the input data into a feature vector with a higher-level representation. By learning the weights and biases of the fully connected layer, the network can automatically extract the relevant characteristics of the input data and encode them into a more expressive feature vector. Such an encoding captures higher-level semantic information of the input data, enabling the subsequent classifier to better distinguish between different classes. Applied to the optimized multi-scale chip destruction area texture feature vector, fully connected encoding converts the input texture feature vector into an encoded classification feature vector; with appropriate weights and biases, the fully connected layer turns the texture feature vector into a more discriminative representation. The encoded feature vector is better suited to the classification task, and a classification result indicating whether the chip is completely destroyed is obtained by feeding it into the Softmax classification function of the classifier. In short, fully connected encoding uses a fully connected layer to convert the input data into a more expressive feature vector, improving its representational power so that the subsequent classifier can perform the classification task better.
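The classification head described above can be sketched as follows in PyTorch: fully connected encoding followed by a two-way Softmax over the "completely destroyed" / "not completely destroyed" labels. The input and hidden dimensions, and the mapping of label indices, are assumptions for the example.

```python
import torch
import torch.nn as nn

class DestructionClassifier(nn.Module):
    """Fully connected encoding plus softmax over two labels:
    index 0 = completely destroyed (p1), index 1 = not completely
    destroyed (p2), with p1 + p2 = 1. Layer sizes are assumptions."""
    def __init__(self, in_features: int = 128, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(          # fully connected encoding
            nn.Linear(in_features, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 2)       # logits for the two labels

    def forward(self, multi_scale_vec: torch.Tensor) -> torch.Tensor:
        logits = self.head(self.encoder(multi_scale_vec))
        return torch.softmax(logits, dim=-1)   # two-class probability distribution

# usage sketch:
# clf = DestructionClassifier()
# probs = clf(torch.randn(1, 128))             # e.g. tensor([[p1, p2]])
# completely_destroyed = bool(probs[0, 0] > probs[0, 1])
```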
Further, in the destruction method based on the chip self-destruction circuit of the embodiment of the disclosure, a self-destruction circuit is arranged inside or outside the chip; the self-destruction circuit can be triggered by an external signal or an internal condition to generate high-temperature, high-voltage, high-current or high-frequency electrical energy that irreversibly damages the key components of the chip. One or more protective layers are arranged on the key parts of the chip; the protective layers prevent environmental factors from damaging the chip in normal use, but cannot block the electrical energy generated by the self-destruction circuit from damaging the chip when the circuit is triggered. A package shell is arranged outside the chip; it protects the chip from external physical, chemical or electromagnetic interference, but cannot prevent the electrical energy generated by the self-destruction circuit from penetrating to the chip when the circuit is triggered. After the self-destruction circuit is triggered, the key components of the chip are completely or partially destroyed, so that the chip loses its function and the risk of data leakage is minimized.
In summary, according to the destruction method based on the chip self-destruction circuit in the embodiment of the disclosure, the chip destruction area can be automatically analyzed to determine whether the chip is completely destroyed.
Further, embodiments of the present disclosure also provide a chip self-destruction circuit that operates with a method of destruction based on a chip self-destruction circuit as described in any of the preceding.
More specifically, the chip self-destruction circuit is arranged inside the chip and includes a control module, a drive module and a load module connected in sequence; one end of the load module is grounded and the other end is connected to the drive module, wherein: the control module is used for outputting a high level so that the chip self-destruction circuit is turned on; the drive module is used for burning the chip when the current formed after the chip self-destruction circuit is turned on flows through it; and the load module is used for forming the current in the chip self-destruction circuit when the circuit is turned on. Further, the control module is also used for outputting a low level, whereupon the chip self-destruction circuit is turned off.
Fig. 5 shows a block diagram of a destruction system 100 based on a chip self-destruction circuit according to an embodiment of the present disclosure. As shown in fig. 5, a destruction system 100 based on a chip self-destruction circuit according to an embodiment of the present disclosure includes: the image acquisition module 110 is used for acquiring an image of a chip destruction area after the chip self-destruction circuit test acquired by the camera is started; the texture feature extraction module 120 is configured to perform texture feature extraction on the image of the chip destruction area to obtain a shallow full-perception feature vector of the texture of the chip destruction area and a deep full-perception feature vector of the texture of the chip destruction area; the vector fusion module 130 is configured to fuse the shallow full-perception feature vector of the texture of the chip destruction area and the deep full-perception feature vector of the texture of the chip destruction area to obtain a texture feature vector of the multi-scale chip destruction area; and a destroy confirmation module 140, configured to determine whether to destroy completely based on the texture feature vector of the multi-scale chip destroy area.
In one possible implementation, the texture feature extraction module 120 includes: an HOG image calculation unit, configured to calculate an HOG image of the chip destruction area image to obtain a chip destruction area HOG image; an image texture feature extraction unit, configured to pass the chip destruction area HOG image through an image texture feature extractor based on a pyramid network to obtain a chip destruction area texture shallow feature map and a chip destruction area texture deep feature map; and a feature full-perception unit, configured to pass the chip destruction area texture shallow feature map and the chip destruction area texture deep feature map respectively through a feature full-perception module based on a fully connected layer to obtain the chip destruction area texture shallow full-perception feature vector and the chip destruction area texture deep full-perception feature vector.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described chip self-destruction circuit-based destruction system 100 have been described in detail in the above description of the chip self-destruction circuit-based destruction method with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
As described above, the destruction system 100 based on the chip self-destruction circuit according to the embodiment of the present disclosure may be implemented in various wireless terminals, for example, a server or the like having a destruction algorithm based on the chip self-destruction circuit. In one possible implementation, the destruction system 100 based on a chip self-destruction circuit according to embodiments of the present disclosure may be integrated into the wireless terminal as one software module and/or hardware module. For example, the destruction system 100 based on the chip self-destruction circuit may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the destruction system 100 based on the chip self-destruction circuit may also be one of a plurality of hardware modules of the wireless terminal.
Alternatively, in another example, the chip self-destruction circuit based destruction system 100 and the wireless terminal may be separate devices, and the chip self-destruction circuit based destruction system 100 may be connected to the wireless terminal through a wired and/or wireless network and transmit interactive information in an agreed data format.
Fig. 6 shows an application scenario diagram of a destruction method based on a chip self-destruction circuit according to an embodiment of the present disclosure. As shown in fig. 6, in this application scenario, first, a chip destruction area image (e.g., D illustrated in fig. 6) after the start of the chip self-destruction circuit test acquired by a camera (e.g., C illustrated in fig. 6) is acquired, and then, the chip destruction area image is input to a server (e.g., S illustrated in fig. 6) where a chip self-destruction circuit-based destruction algorithm is deployed, wherein the server can process the chip destruction area image using the chip self-destruction circuit-based destruction algorithm to obtain a classification result for indicating whether or not to completely destroy.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (6)

1. The destroying method based on the chip self-destroying circuit is characterized by comprising the following steps of:
acquiring a chip destruction area image acquired by a camera after the chip self-destruction circuit test is started;
extracting texture features of the image of the chip destroying area to obtain a shallow texture full-perception feature vector of the chip destroying area and a deep texture full-perception feature vector of the chip destroying area;
fusing the shallow full-perception feature vector of the texture of the chip destruction area and the deep full-perception feature vector of the texture of the chip destruction area to obtain a texture feature vector of the multi-scale chip destruction area; and determining whether to completely destroy the chip based on the texture feature vector of the multi-scale chip destruction area.
2. The destruction method based on a chip self-destruction circuit according to claim 1, wherein the performing texture feature extraction on the chip destruction area image to obtain a shallow full-perception feature vector of a chip destruction area texture and a deep full-perception feature vector of a chip destruction area texture comprises:
calculating an HOG image of the chip destruction area image to obtain a chip destruction area HOG image;
the HOG image of the chip destroying area passes through an image texture feature extractor based on a pyramid network to obtain a shallow texture feature map of the chip destroying area and a deep texture feature map of the chip destroying area; and the texture shallow feature map of the chip destruction area and the texture deep feature map of the chip destruction area are respectively passed through a feature full-perception module based on a full-connection layer to obtain the texture shallow full-perception feature vector of the chip destruction area and the texture deep full-perception feature vector of the chip destruction area.
3. The destruction method based on a chip self-destruction circuit according to claim 2, wherein fusing the shallow full-perception feature vector of the chip destruction area texture and the deep full-perception feature vector of the chip destruction area texture to obtain a multi-scale chip destruction area texture feature vector, comprises:
and fusing the shallow full-perception feature vector of the texture of the chip destroying area and the deep full-perception feature vector of the texture of the chip destroying area by using principal component analysis to obtain the texture feature vector of the multi-scale chip destroying area.
4. The destruction method based on a chip self-destruction circuit as claimed in claim 3, wherein determining whether to completely destroy based on the multi-scale chip destruction area texture feature vector comprises:
performing feature distribution gain on the texture feature vector of the multi-scale chip destruction area to obtain an optimized texture feature vector of the multi-scale chip destruction area; and the texture feature vector of the optimized multi-scale chip destruction area passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the chip is completely destroyed or not.
5. The method for destroying a chip self-destruction circuit according to claim 4, wherein performing feature distribution gain on the multi-scale chip destruction area texture feature vector to obtain an optimized multi-scale chip destruction area texture feature vector comprises:
performing feature distribution gain on the texture feature vector of the multi-scale chip destruction area by using the following optimization formula to obtain the texture feature vector of the optimized multi-scale chip destruction area;
wherein, the optimization formula is:
wherein V is the multi-scale chip destruction area texture feature vector, L is the length of the multi-scale chip destruction area texture feature vector, v_i is the feature value at the i-th position of the multi-scale chip destruction area texture feature vector, ||V||_2^2 denotes the square of the two-norm of the multi-scale chip destruction area texture feature vector, α is a weighted hyper-parameter, exp(·) denotes the exponential operation on a value, i.e., the natural exponential function raised to that value, and v'_i is the feature value at the i-th position of the optimized multi-scale chip destruction area texture feature vector.
6. A chip self-destruction circuit, characterized by operating with the destruction method based on a chip self-destruction circuit as claimed in any one of claims 1 to 5.
CN202311249335.2A 2023-09-26 2023-09-26 Chip self-destruction circuit and destruction method based on same Pending CN117235820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311249335.2A CN117235820A (en) 2023-09-26 2023-09-26 Chip self-destruction circuit and destruction method based on same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311249335.2A CN117235820A (en) 2023-09-26 2023-09-26 Chip self-destruction circuit and destruction method based on same

Publications (1)

Publication Number Publication Date
CN117235820A true CN117235820A (en) 2023-12-15

Family

ID=89087715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311249335.2A Pending CN117235820A (en) 2023-09-26 2023-09-26 Chip self-destruction circuit and destruction method based on same

Country Status (1)

Country Link
CN (1) CN117235820A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117457032A (en) * 2023-12-25 2024-01-26 山东万里红信息技术有限公司 Storage medium destroying method based on volume identification
CN117457032B (en) * 2023-12-25 2024-03-22 山东万里红信息技术有限公司 Storage medium destroying method based on volume identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination