CN116151174A - General device model optimization method and system - Google Patents
- Publication number
- CN116151174A (application number CN202310397733.2A)
- Authority
- CN
- China
- Prior art keywords
- model
- training
- data
- current
- source voltage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 35
- 238000005457 optimization Methods 0.000 title claims abstract description 19
- 238000012549 training Methods 0.000 claims abstract description 82
- 238000003062 neural network model Methods 0.000 claims abstract description 21
- 238000013528 artificial neural network Methods 0.000 claims abstract description 19
- 238000007781 pre-processing Methods 0.000 claims abstract description 16
- 238000012545 processing Methods 0.000 claims abstract description 15
- 238000010606 normalization Methods 0.000 claims abstract description 7
- 230000006870 function Effects 0.000 claims description 44
- 238000009499 grossing Methods 0.000 claims description 18
- 229920006395 saturated elastomer Polymers 0.000 claims description 14
- 238000004364 calculation method Methods 0.000 claims description 10
- 238000006243 chemical reaction Methods 0.000 claims description 10
- 238000010276 construction Methods 0.000 claims description 6
- 238000012546 transfer Methods 0.000 claims description 6
- 239000004065 semiconductor Substances 0.000 abstract description 3
- 238000004088 simulation Methods 0.000 description 23
- 238000004422 calculation algorithm Methods 0.000 description 12
- 238000010801 machine learning Methods 0.000 description 6
- 238000012360 testing method Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 238000011165 process development Methods 0.000 description 2
- 229910052710 silicon Inorganic materials 0.000 description 2
- 239000010703 silicon Substances 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 238000013499 data model Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 239000002060 nanoflake Substances 0.000 description 1
- 239000002064 nanoplatelet Substances 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 238000013138 pruning Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/30—Circuit design
- G06F30/36—Circuit design at the analogue level
- G06F30/373—Design optimisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/30—Circuit design
- G06F30/36—Circuit design at the analogue level
- G06F30/367—Design verification, e.g. using simulation, simulation program with integrated circuit emphasis [SPICE], direct methods or relaxation methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Geometry (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Liquid Crystal Display Device Control (AREA)
Abstract
The invention discloses a general device model optimization method and system in the technical field of semiconductor device modeling. First, the device parameters and voltage biases are preprocessed; second, a neural network model is constructed using a mean square relative error and a multi-objective loss function; finally, data under several groups of different parameter combinations are randomly selected for preliminary training of the model, after which all of the data are imported into the neural network model for full training on the basis of the preliminarily trained model. The invention integrates data normalization, a multi-objective loss function, model pre-training and related methods; the resulting model maintains the current accuracy of the device under various parameter combinations and bias voltages while effectively shortening the training time.
Description
Technical Field
The invention relates to the technical field of semiconductor device modeling, in particular to a general device model optimization method and system.
Background
Currently, TCAD simulation plays a very important role in semiconductor process development. However, TCAD simulation is slow, and optimizing a combination of process parameters requires many simulation runs, which reduces process-development efficiency. Building a general model from process parameters to device electrical characteristics out of historical TCAD simulation data, and using it in place of the TCAD simulator, can therefore improve the efficiency of process-parameter optimization. To overcome this limitation of TCAD simulation, different machine learning algorithms have been applied to device simulation; given suitable training data, these algorithms can accurately reproduce device-simulation results. Most importantly, such algorithms are predictive: they can estimate TCAD simulation results for devices outside the provided parameter range. By combining them with device simulation, a predictive model built from a modest amount of TCAD simulation data can greatly reduce the time that large numbers of TCAD simulations would otherwise require.
In 2021, Kashyap Mehta, Hiu-Yung Wong et al. used machine learning to predict FinFET current-voltage and capacitance-voltage curves, demonstrating the possibility of predicting generic device I-V and C-V curves by machine training with limited training data (25-50 groups).
In 2021, Chandni Akbar, Yiming Li et al. proposed machine learning (ML)-assisted simulation of work-function fluctuation in three-dimensional multi-channel gate-all-around silicon nanosheet MOSFETs. Their proposed ML-RFR algorithm for predicting ID-VG curves showed the same accuracy as device simulation.
In 2022, R. Butola, Y. Li and S. R. Kola proposed a machine-learning-based approach to modeling the internal parameters of gate-all-around silicon nanosheet MOSFETs; their results indicate that the outputs predicted by the proposed model achieve an R2 score of 99% and an error rate of less than 1%.
It can be seen that machine learning has been used to build device models in place of TCAD simulation, saving significant TCAD simulation time. However, there remains considerable room to optimize model precision, training time, and the universality of the modeling algorithm.
Disclosure of Invention
In view of the above, the invention provides a general device model optimization method and system, so as to solve the problems of model precision, model training time and model universality existing in the device model establishment process.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in one aspect, a method for optimizing a generic device model is provided, comprising the steps of:
step 1, preprocessing data of device parameters and voltage bias;
step 2, constructing a neural network model by adopting a mean square relative error and a multi-objective loss function;
and 3, randomly selecting data under a plurality of groups of different parameter combinations from the data, performing preliminary training on the model, and importing all the data into the neural network model on the basis of the preliminary training model to perform training on the neural network.
Optionally, the preprocessing specifically includes:
firstly, normalizing the structural parameters of a device;
secondly, performing conversion function processing on the current;
and finally, performing smoothing function processing on the gate-source voltage and the drain-source voltage.
Optionally, the formula of the normalization process is:
Optionally, the conversion function is:
where I_ds is the output current, V_ds is the drain-source voltage, and y is the data corresponding to I_ds after conversion.
Optionally, the smoothing functions of the gate-source voltage and the drain-source voltage are respectively:
where δ is a constant, V_ds is the drain-source voltage, V_gs is the gate-source voltage, and V_ds' and V_gs' are the smoothed drain-source and gate-source voltages, respectively.
Optionally, the mean square relative error is:
Optionally, the multi-objective loss function is:
where I_off is the off-state current, I_sat is the saturation current, M denotes the current, MSRE_M is the mean square relative error of the current, MSRE_sat is the mean square relative error of the saturation current, MSRE_off is the mean square relative error of the off-state current, A is the current weight coefficient, B is the saturation-current weight coefficient, C is the off-state-current weight coefficient, Ns is the amount of training data, and i denotes the i-th training sample.
In another aspect, a generic device model optimization system is provided, comprising the following modules:
the preprocessing module is used for preprocessing data of device parameters and voltage bias;
the neural network model construction module is used for constructing a neural network model by adopting a mean square relative error and a multi-objective loss function;
the training module of the neural network randomly selects data under a plurality of groups of different parameter combinations from the data, performs preliminary training on the model, and guides all the data into the neural network model on the basis of the preliminary training model to perform training on the neural network.
Optionally, the preprocessing module includes a conversion function unit that converts the current data using the following formula:
where I_ds is the output current, V_ds is the drain-source voltage, and y is the data corresponding to I_ds after conversion;
the preprocessing module also comprises a smoothing function unit, which processes the gate-source voltage and the drain-source voltage respectively:
where δ is a constant, V_ds is the drain-source voltage, V_gs is the gate-source voltage, and V_ds' and V_gs' are the smoothed drain-source and gate-source voltages, respectively.
Optionally, the neural network model building module comprises a mean square relative error calculation unit and a multi-objective loss function calculation unit;
the mean square relative error calculating unit is as follows:
The multi-objective loss function calculation unit is as follows:
where I_off is the off-state current, I_sat is the saturation current, M denotes the current, MSRE_M is the mean square relative error of the current, MSRE_sat is the mean square relative error of the saturation current, MSRE_off is the mean square relative error of the off-state current, A is the current weight coefficient, B is the saturation-current weight coefficient, C is the off-state-current weight coefficient, Ns is the amount of training data, and i denotes the i-th training sample.
Compared with the prior art, the invention discloses a general device model optimization method and system, which integrate the methods of data normalization processing, multi-objective loss function, model pre-training and the like, and the model obtained by the algorithm can ensure the current precision of the device under various parameter combinations and bias voltage and can effectively shorten the training time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a neural network according to embodiment 2 of the present invention;
fig. 2a is a diagram showing comparison between TCAD simulation and model prediction of the output characteristic curve of the MOSFET device with planar structure according to embodiment 3 of the present invention;
fig. 2b is a diagram showing a comparison between TCAD simulation and model prediction of a transfer characteristic curve of a MOSFET device with a planar structure according to embodiment 3 of the present invention;
fig. 3a is a diagram showing a comparison between TCAD simulation and model prediction of the output characteristic curve of the MOSFET device with the gate-all-around structure according to embodiment 3 of the present invention;
fig. 3b is a diagram showing a comparison between TCAD simulation and model prediction of the transfer characteristic curve of the MOSFET device with the gate-all-around structure according to the embodiment 3 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
On the one hand, embodiment 1 of the invention discloses a general device model optimization method, which comprises the following steps:
step 1, preprocessing data of device parameters and voltage bias;
step 2, constructing a neural network model by adopting a mean square relative error and a multi-objective loss function;
and 3, randomly selecting data under a plurality of groups of different parameter combinations from the data, performing preliminary training on the model, and importing all the data into the neural network model on the basis of the preliminary training model to perform training on the neural network.
In a specific embodiment, the pretreatment specifically comprises:
firstly, normalizing the structural parameters of a device;
secondly, performing conversion function processing on the current;
and finally, performing smoothing function processing on the gate-source voltage and the drain-source voltage.
In a specific embodiment, the normalization process is formulated as:
In a specific embodiment, the transfer function is:
where I_ds is the output current, V_ds is the drain-source voltage, and y is the data corresponding to I_ds after conversion.
In a specific embodiment, the smoothing functions of the gate-source voltage and the drain-source voltage are respectively:
where δ is a constant, V_ds is the drain-source voltage, V_gs is the gate-source voltage, and V_ds' and V_gs' are the smoothed drain-source and gate-source voltages, respectively.
In one specific embodiment, the mean square relative error is:
In a specific embodiment, the multiple objective loss function is:
where I_off is the off-state current, I_sat is the saturation current, M denotes the current, MSRE_M is the mean square relative error of the current, MSRE_sat is the mean square relative error of the saturation current, MSRE_off is the mean square relative error of the off-state current, A is the current weight coefficient, B is the saturation-current weight coefficient, C is the off-state-current weight coefficient, Ns is the amount of training data, and i denotes the i-th training sample.
In another aspect, a generic device model optimization system is provided, comprising the following modules:
the preprocessing module is used for preprocessing data of device parameters and voltage bias;
the neural network model construction module is used for constructing a neural network model by adopting a mean square relative error and a multi-objective loss function;
the training module of the neural network randomly selects data under a plurality of groups of different parameter combinations from the data, performs preliminary training on the model, and guides all the data into the neural network model on the basis of the preliminary training model to perform training on the neural network.
In a specific embodiment, the preprocessing module comprises a conversion function unit that converts the current data using the following formula:
where I_ds is the output current, V_ds is the drain-source voltage, and y is the data corresponding to I_ds after conversion.
The preprocessing module also comprises a smoothing function unit, which processes the gate-source voltage and the drain-source voltage respectively:
where δ is a constant, V_ds is the drain-source voltage, V_gs is the gate-source voltage, and V_ds' and V_gs' are the smoothed drain-source and gate-source voltages, respectively.
In a specific embodiment, the neural network model building module comprises a mean square relative error calculation unit and a multi-objective loss function calculation unit;
the mean square relative error calculating unit is as follows:
The multi-objective loss function calculation unit is as follows:
where I_off is the off-state current, I_sat is the saturation current, M denotes the current, MSRE_M is the mean square relative error of the current, MSRE_sat is the mean square relative error of the saturation current, MSRE_off is the mean square relative error of the off-state current, A is the current weight coefficient, B is the saturation-current weight coefficient, C is the off-state-current weight coefficient, Ns is the amount of training data, and i denotes the i-th training sample.
Example 2 is introduced for further explanation in order to further understand the technical scheme of the present invention.
On the one hand, embodiment 2 of the invention discloses a general device model optimization method, which comprises the following steps:
1. data processing
The data contain the input parameters (device parameters and voltage biases). The magnitudes of different device parameters may differ too widely; using such data directly as model inputs can cause the influence of small-magnitude parameters to be ignored and that of large-magnitude parameters to be amplified. For this reason, the present embodiment normalizes the device-parameter data.
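The normalization formula itself appears only as an image in the original filing. A minimal sketch of standard min-max normalization, which is a common choice for this purpose but an assumption here rather than the patent's exact formula, is:

```python
import numpy as np

def normalize_params(params):
    """Min-max normalize each device-parameter column to [0, 1].

    `params` is an (N, P) array: N samples, P structural parameters.
    This standard form is assumed; the patent's exact normalization
    formula is not shown in the translated text.
    """
    p_min = params.min(axis=0)
    p_max = params.max(axis=0)
    return (params - p_min) / (p_max - p_min)

# Example: two device parameters on very different scales
# (a gate length in meters, a doping concentration in cm^-3).
raw = np.array([[10e-9, 1e18],
                [20e-9, 5e18],
                [30e-9, 9e18]])
scaled = normalize_params(raw)
```

After scaling, both columns span [0, 1], so neither parameter dominates the network inputs purely by magnitude.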
Meanwhile, because the magnitude of the current data changes greatly from the pinch-off region to the saturation region, building the model directly on the raw data leads to low accuracy in the pinch-off region. The data must therefore be converted, and the choice of conversion function is important to model accuracy. The current data are processed as follows:
where I_ds is the output current, V_ds is the drain-source voltage, and y is the data corresponding to I_ds after conversion.
Considering that this conversion is undefined when V_ds = 0, the invention deletes the data points with V_ds = 0. Because I_ds = 0 when V_ds = 0, this pruning of the data is reasonable. The conversion thus confines the large variation in the magnitude of I_ds to a limited range of y.
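The conversion function is also an image in the original. One transformation consistent with the stated properties (undefined at V_ds = 0, and compressing the wide magnitude range of the current, matching the "current logarithmization" mentioned in embodiment 3) is y = ln(I_ds / V_ds); it is used below purely as an illustrative assumption, not as the patent's actual formula:

```python
import numpy as np

def convert_current(i_ds, v_ds):
    """Illustrative conversion y = ln(I_ds / V_ds) (assumed form).

    Rows with V_ds == 0 are dropped first, mirroring the pruning
    described in the text, since the conversion is undefined there.
    """
    keep = v_ds != 0
    return np.log(i_ds[keep] / v_ds[keep]), keep

# Hypothetical bias sweep: the first point sits at V_ds = 0.
i_ds = np.array([0.0, 1.0e-6, 4.0e-6])
v_ds = np.array([0.0, 0.1,    0.2])
y, keep = convert_current(i_ds, v_ds)
```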
When the model is used for radio-frequency distortion simulation, it is particularly important that its I-V curves pass the Gummel symmetry test; for this reason, smoothing functions for V_ds and V_gs are introduced.
where δ is a constant, V_ds is the drain-source voltage, V_gs is the gate-source voltage, and V_ds' and V_gs' are the smoothed drain-source and gate-source voltages, respectively.
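The smoothing functions are likewise missing from the translation. As one plausible stand-in, compact SPICE-style models often use a hyperbolic smoothing of the form V' = 0.5 (V + sqrt(V^2 + 4 δ^2)), which removes the derivative discontinuity at V = 0 while leaving large positive biases almost unchanged. The sketch below assumes this form; the patent's actual functions may differ:

```python
import math

def smooth_voltage(v, delta=1e-3):
    """Hyperbolic smoothing of a bias voltage (assumed form).

    Approaches max(v, 0) smoothly; the constant delta sets the width
    of the transition region around v = 0. Infinitely differentiable,
    which is what a Gummel-symmetry-style continuity check rewards.
    """
    return 0.5 * (v + math.sqrt(v * v + 4.0 * delta * delta))

v_at_zero = smooth_voltage(0.0)   # equals delta, not 0: smooth at the origin
v_at_one = smooth_voltage(1.0)    # essentially unchanged for |v| >> delta
```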
2. Model construction
In order to achieve high accuracy in the neural network model, the loss function and network architecture adopted during training are critical. The neural network structure adopted in this embodiment is shown in fig. 1. The MSE (mean square error) loss function commonly used with neural networks gives high accuracy near the pinch-off region but poor accuracy in the saturation region, and the current curve shows an unreasonable trend at larger gate voltages.
Therefore, this embodiment adopts the mean square relative error and, taking into account the accuracy requirements on the off-state current (I_off) and the saturation current (I_sat), uses a multi-objective loss function:
where I_off is the off-state current, I_sat is the saturation current, M denotes the current, MSRE_M is the mean square relative error of the current, MSRE_sat is the mean square relative error of the saturation current, MSRE_off is the mean square relative error of the off-state current, A is the current weight coefficient, B is the saturation-current weight coefficient, C is the off-state-current weight coefficient, Ns is the amount of training data, and i denotes the i-th training sample.
The mean square relative error is:
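The formula images are absent from the translation, but the symbol definitions above (Ns training samples, per-sample relative errors, weights A, B, C on the current, saturation-current and off-state-current terms) suggest the following sketch; the exact combination in the patent is an assumption:

```python
import numpy as np

def msre(pred, true):
    """Mean square relative error over the Ns training samples:
    mean of ((pred_i - true_i) / true_i)^2."""
    return np.mean(((pred - true) / true) ** 2)

def multi_objective_loss(pred_m, true_m, pred_sat, true_sat,
                         pred_off, true_off, A=1.0, B=1.0, C=1.0):
    """Weighted sum of the three MSRE terms, following the symbol
    definitions in the text (A, B, C are the current, saturation-current
    and off-state-current weight coefficients). Assumed reconstruction."""
    return (A * msre(pred_m, true_m)
            + B * msre(pred_sat, true_sat)
            + C * msre(pred_off, true_off))

p = np.array([1.1, 0.9])
t = np.ones(2)
loss = multi_objective_loss(p, t, p, t, p, t, A=1.0, B=2.0, C=3.0)
```

Using relative rather than absolute errors is what keeps the tiny off-state currents from being swamped by the much larger saturation currents in the loss.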
3. Model training
For the training of the neural network, the initial weight and bias settings are particularly important to the training duration. In general, the initial parameters of a neural network are set randomly and then continuously adjusted during training to reduce the loss function toward its optimum, but with a large amount of training data this leads to long training times. For this reason, the present embodiment adopts a model pre-training method: data under 50 groups of different parameter combinations are randomly extracted and used for preliminary training of the model, which takes little time because the data volume is small; the model is then trained on the full data set starting from this pre-trained model.
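The two-stage procedure above can be sketched as follows. A plain least-squares model trained by gradient descent stands in for the neural network, and the data are synthetic; only the selection-then-fine-tune structure reflects the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 parameter combinations, 10 bias points each.
n_combos, n_bias = 500, 10
X = rng.normal(size=(n_combos * n_bias, 3))
w_true = np.array([1.5, -2.0, 0.7])
y = X @ w_true + 0.01 * rng.normal(size=len(X))
combo_id = np.repeat(np.arange(n_combos), n_bias)

def train(X, y, w, lr=0.05, steps=200):
    """Gradient descent on squared error, as a stand-in for the
    neural-network training loop described in the text."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Stage 1: pre-train on data from 50 randomly chosen parameter combinations.
chosen = rng.choice(n_combos, size=50, replace=False)
mask = np.isin(combo_id, chosen)
w = train(X[mask], y[mask], np.zeros(3))

# Stage 2: train on the full data set, starting from the pre-trained weights.
w = train(X, y, w)
```

Because stage 1 already places the weights near a good optimum, stage 2 starts far from a random initialization, which is the mechanism by which the pre-training shortens overall training time.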
Example 3 is introduced for further explanation in order to further understand the technical scheme of the present invention.
Embodiment 3 of the invention provides an optimization algorithm for an N-type MOSFET device model applicable to both planar and gate-all-around structures. Verification of the model obtained by the algorithm shows that:
Compared with the TCAD simulation data, the model's predicted current (I_ds) error is less than 1%, the saturation current (I_sat) error is less than 3%, the off-state current (I_off) error is less than 11%, and the threshold voltage (V_th) error is less than 0.005 V, verifying the accuracy of the algorithm and the universality of the model.
The specific steps of this embodiment 3 include data preprocessing, model construction, and model prediction.
(1) Data processing. According to the provided training-data parameter-combination file, the corresponding voltage and current data under the different device-structure parameter combinations are retrieved and arranged into a data table: the leading columns correspond to the device structure parameters, and the last three columns correspond to the gate-source voltage V_gs, the drain-source voltage V_ds, and the output current I_ds, respectively. Device-structure parameter normalization and current logarithmization are then carried out, and, so that the model passes the Gummel test, smoothing-function processing is applied to V_ds and V_gs.
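The table-assembly step can be sketched as below. The column layout (structure parameters first, then V_gs, V_ds, I_ds) follows the text; the parameter names and the CSV format are assumptions for illustration, since the actual file format is not given:

```python
import csv
import io

# Hypothetical raw TCAD export. "Lg" and "Tox" are assumed structure
# parameters; the real parameter-combination file is not reproduced
# in the patent text.
raw = """Lg,Tox,Vgs,Vds,Ids
2e-08,1e-09,0.0,0.05,1.2e-09
2e-08,1e-09,0.5,0.05,3.4e-06
3e-08,1e-09,0.5,0.05,2.9e-06
"""

rows = list(csv.DictReader(io.StringIO(raw)))
# Leading columns hold device-structure parameters; the final three
# columns are V_gs, V_ds and I_ds, matching the layout in step (1).
table = [[float(r["Lg"]), float(r["Tox"]),
          float(r["Vgs"]), float(r["Vds"]), float(r["Ids"])]
         for r in rows]
```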
(2) Model construction. Based on the data table obtained in the first step, the parameters are divided into input parameters and output parameters. First, the model is pre-trained: data under 50 groups of different device-parameter combinations are randomly extracted from the data for preliminary training of the neural network. On this basis, all of the training data are then imported into the model for full training of the neural network; the proposed multi-objective loss function based on relative errors is iteratively optimized until the loss is smaller than 0.00001, yielding the final model.
(3) Model prediction. According to the provided test-data parameter-combination file, the corresponding voltage data under the different device-structure parameter combinations are retrieved and used as input to obtain the model's prediction results.
TCAD-simulation versus model-prediction comparisons of the current output characteristics and transfer characteristics of the planar-structure MOSFET device are shown in figs. 2a and 2b, and the corresponding comparisons for the gate-all-around-structure MOSFET device are shown in figs. 3a and 3b.
The model training and testing times for the planar and gate-all-around MOSFET devices of this embodiment are listed in Table 1, and the specific errors of the model accuracy indices, namely the off-state current (I_off), saturation current (I_sat) and threshold voltage (V_th), are listed in Table 2.
Table 1. Model training and test times for the planar and gate-all-around MOSFET devices

Device structure | Training time | Test time | Total time
---|---|---|---
Gate-all-around | 762.62 s | 878.23 s | 1640.85 s
Planar | 2210.53 s | 1442.42 s | 3652.95 s
Table 2. Specific errors of the model accuracy indices for the planar and gate-all-around MOSFET devices
In summary, this embodiment provides a TCAD-oriented, fast and high-precision general device-model algorithm, which solves the problems of model precision and long model training time by adopting data processing, a multi-objective loss function based on relative errors, and a model pre-training method.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method for optimizing a generic device model, comprising the steps of:
step 1, preprocessing data of device parameters and voltage bias;
step 2, constructing a neural network model by adopting a mean square relative error and a multi-objective loss function;
and 3, randomly selecting data under a plurality of groups of different parameter combinations from the data, performing preliminary training on the model, and importing all the data into the neural network model on the basis of the preliminary training model to perform training on the neural network.
2. The method for optimizing a generic device model according to claim 1, wherein the preprocessing specifically comprises:
firstly, normalizing the structural parameters of a device;
secondly, performing conversion function processing on the current;
and finally, performing smoothing function processing on the gate-source voltage and the drain-source voltage.
5. The method for optimizing a generic device model according to claim 2, wherein the smoothing functions of the gate-source voltage and the drain-source voltage are respectively:
7. The method of claim 1, wherein the multi-objective loss function is:
where I_off is the off-state current, I_sat is the saturation current, M denotes the current, MSRE_M is the mean square relative error of the current, MSRE_sat is the mean square relative error of the saturation current, MSRE_off is the mean square relative error of the off-state current, A is the current weight coefficient, B is the saturation-current weight coefficient, C is the off-state-current weight coefficient, Ns is the amount of training data, and i denotes the i-th training sample.
8. A generic device model optimization system comprising the following modules:
the preprocessing module is used for preprocessing data of device parameters and voltage bias;
the neural network model construction module is used for constructing a neural network model by adopting a mean square relative error and a multi-objective loss function;
the training module of the neural network randomly selects data under a plurality of groups of different parameter combinations from the data, performs preliminary training on the model, and guides all the data into the neural network model on the basis of the preliminary training model to perform training on the neural network.
9. The generic device model optimization system of claim 8, wherein the preprocessing module comprises a conversion function unit that converts the current data using the following formula:
where I_ds is the output current, V_ds is the drain-source voltage, and y is the data corresponding to I_ds after conversion;
the preprocessing module also comprises a smoothing function unit, which processes the gate-source voltage and the drain-source voltage respectively:
10. The general device model optimization system of claim 8, wherein the neural network model building module comprises a mean square relative error calculation unit and a multi-objective loss function calculation unit;
the mean square relative error calculation unit computes \(E_M = \frac{1}{Ns}\sum_{i=1}^{Ns}\left(\frac{M_i - \hat{M}_i}{M_i}\right)^2\), where \(M_i\) is the measured current of the \(i\)-th sample and \(\hat{M}_i\) is the model prediction;
the multi-objective loss function calculation unit computes \(L = A\,E_M + B\,E_{sat} + C\,E_{off}\);
wherein \(I_{off}\) is the off-state current, \(I_{sat}\) is the saturation current, \(M\) represents the current, \(E_{sat}\) is the mean square relative error of the saturation current, \(E_{off}\) is the mean square relative error of the off-state current, \(A\) is the current weight coefficient, \(B\) is the saturation-current weight coefficient, \(C\) is the off-state-current weight coefficient, \(Ns\) is the amount of training data, and \(i\) denotes the \(i\)-th training sample.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310397733.2A CN116151174A (en) | 2023-04-14 | 2023-04-14 | General device model optimization method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116151174A true CN116151174A (en) | 2023-05-23 |
Family
ID=86356477
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310397733.2A Pending CN116151174A (en) | 2023-04-14 | 2023-04-14 | General device model optimization method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116151174A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109791627A (en) * | 2018-06-19 | 2019-05-21 | 香港应用科技研究院有限公司 | Semiconductor device modeling using input preprocessing and transformed targets for training a deep neural network |
US20190385047A1 (en) * | 2018-06-19 | 2019-12-19 | Hong Kong Applied Science and Technology Research Institute Company, Limited | Semiconductor Device Modeling Using Input Pre-Processing and Transformed Targets for Training a Deep Neural Network |
US20220114317A1 (en) * | 2020-10-13 | 2022-04-14 | Samsung Electronics Co., Ltd. | Systems, methods, and computer program products for transistor compact modeling using artificial neural networks |
US20230025626A1 (en) * | 2021-07-20 | 2023-01-26 | Samsung Electronics Co., Ltd. | Method and apparatus for generating process simulation models |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
LiKamWa et al. | Redeye: analog convnet image sensor architecture for continuous mobile vision | |
CN103678941B (en) | The Forecasting Methodology of electrode air gap breakdown voltage | |
CN101859383B (en) | Hyperspectral remote sensing image band selection method based on time sequence important point analysis | |
US7761275B2 (en) | Synthesizing current source driver model for analysis of cell characteristics | |
JP5006214B2 (en) | Variation simulation system, variation determination model method and apparatus, and program | |
Choi et al. | Neural approach for modeling and optimizing Si-MOSFET manufacturing | |
Akbar et al. | Deep learning algorithms for the work function fluctuation of random nanosized metal grains on gate-all-around silicon nanowire MOSFETs | |
Aminzadeh | Systematic circuit design and analysis using generalised gm/ID functions of MOS devices | |
Thakker et al. | A novel table-based approach for design of FinFET circuits | |
CN111079361A (en) | Load modeling method of FPGA circuit | |
CN112580288B (en) | Semiconductor device characteristic modeling method and system based on multi-gradient neural network | |
CN116151174A (en) | General device model optimization method and system | |
He et al. | Analytic carrier-based charge and capacitance model for long-channel undoped surrounding-gate MOSFETs | |
CN111859627B (en) | Parameter optimization method and device for component model | |
US20130125080A1 (en) | Circuit optimization method and apparatus for analog circuit migration | |
Li | A simulation-based evolutionary approach to LNA circuit design optimization | |
CN112685958A (en) | SiC MOSFET blocking voltage determination method based on neural network | |
CN115271154B (en) | Nonlinear regression flood element prediction method based on polynomial and partial least square coupling | |
CN112765933B (en) | Integrated circuit design method based on population optimization algorithm | |
Zhou et al. | Prediction of SET on SRAM based on WOA-BP neural network | |
Wang et al. | Optimization and Performance Prediction of Tunnel Field‐Effect Transistors Based on Deep Learning | |
Wang et al. | DC-Model: A New Method for Assisting the Analog Circuit Optimization | |
Li et al. | Capacitance characteristic optimization of germanium MOSFETs with aluminum oxide by using a semiconductor-device-simulation-based multi-objective evolutionary algorithm method | |
Song et al. | Hardware-aware neural architecture search for stochastic computing-based neural networks on tiny devices | |
Malik et al. | Sparse regression driven mixture importance sampling for memory design |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||