CN106776442B - FPGA transistor size adjusting method - Google Patents

FPGA transistor size adjusting method

Info

Publication number
CN106776442B
CN106776442B (application CN201611105208.5A)
Authority
CN
China
Prior art keywords
transistor
delay
fpga
area
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201611105208.5A
Other languages
Chinese (zh)
Other versions
CN106776442A (en
Inventor
钱涵晶
刘强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201611105208.5A priority Critical patent/CN106776442B/en
Publication of CN106776442A publication Critical patent/CN106776442A/en
Application granted granted Critical
Publication of CN106776442B publication Critical patent/CN106776442B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/42Bus transfer protocol, e.g. handshake; Synchronisation

Abstract

The invention relates to an FPGA (Field-Programmable Gate Array) architecture exploration method that combines accurate models with a GA (Genetic Algorithm) and adjusts the transistor sizes in the models to realize a trade-off optimization of delay and area. Accordingly, the invention discloses a method for adjusting FPGA transistor sizes, comprising the following steps: 1) determining the key parameters influencing FPGA delay; 2) establishing a corresponding Elmore delay model for each circuit; 3) combining the Elmore delay model of the FPGA with a neural network to build a KBNN delay model, training the KBNN delay model, and determining the weights Ω and Φ and the number of hidden neurons m that minimize the training error E_t and the validation error E_v; 4) establishing an improved minimum-width transistor area model and estimating the area of an FPGA island; 5) combining the delay model, the area model, and the GA algorithm to realize fast transistor sizing. The invention is mainly applied in FPGA design.

Description

FPGA transistor size adjusting method
Technical Field
The invention relates to an FPGA architecture exploration method, in particular to an FPGA transistor size adjusting method.
Background
In the Field-Programmable Gate Array (FPGA) architecture exploration process, a transistor-level design tool is indispensable because it can provide accurate delay and area estimates for different architectures, enabling architecture evaluation. Transistor-level design includes the selection of circuit topologies for the different sub-circuits that implement the architecture, and transistor sizing can further improve the area, delay, and power consumption of the FPGA. FPGA design is thus a complex, iterative process of transistor-level design across different architectures, and accurate delay and area models are essential to obtain correct transistor sizing results.
Currently, three methods can be used to estimate the delay of an FPGA. In the first, delay is estimated with an analytical model; the Elmore model is typically used to calculate FPGA delay. Smith et al. use an Elmore model to obtain the FPGA delay and combine it with a GP (Geometric Programming) algorithm to optimize high-level architecture parameters and transistor sizes simultaneously, realizing an area-delay trade-off. Although such analytical-model-based methods have a great speed advantage, their accuracy is limited because they linearize the transistors. In the second method, a complete layout is built for the FPGA circuit and a circuit simulation tool determines the delay. Although this method is accurate, one complete architecture exploration with the circuit simulator HSPICE takes 6-15 hours, which is too long. The third approach combines an analytical model with circuit simulation: for example, a two-stage method optimizes area and delay by using a linear model in the exploration stage and, in the fine-tuning stage, adjusting transistor sizes with the TILOS algorithm on top of HSPICE. This is a compromise between the accuracy and the speed of the two methods above; however, the use of a circuit simulator still reduces the efficiency of architecture exploration.
To accelerate the design process and find a suitable architecture accurately, the invention uses a Knowledge-Based Neural Network (KBNN) to obtain the FPGA delay, uses an improved minimum-width transistor model to obtain the corresponding area, and combines both with a GA (Genetic Algorithm) to complete transistor sizing quickly and accurately. The KBNN combines the strong learning capability of a neural network with the delay trends captured by the established analytical FPGA model, so that the neural network and the analytical model complement each other: the physical meaning of the problem is preserved, and the relationships between parameters are reflected intuitively. Building the relationship between the delay, the FPGA architecture parameters, and the transistor sizes with a KBNN therefore improves model accuracy without significantly increasing estimation time.
Reference documents:
[1] A. M. Smith, G. A. Constantinides, P. Y. K. Cheung. FPGA architecture optimization using geometric programming [J]. Computer-Aided Design of Integrated Circuits and Systems, 2010, 29(8): 1163-1176.
[2] C. Chiasson, V. Betz. COFFE: Fully-automated transistor sizing for FPGAs [C]. Field-Programmable Technology (FPT), Kyoto, 2013: 34-41.
[3] I. Kuon, J. Rose. Exploring area and delay tradeoffs in FPGAs with architecture and automated transistor design [J]. Very Large Scale Integration (VLSI) Systems, 2011, 19(1): 71-84.
Disclosure of Invention
To overcome the deficiencies of the prior art, the present invention aims at a method for quickly and accurately adjusting transistor sizes. The KBNN delay model preserves the nonlinearity of the circuit and accounts for the usually-neglected delay of the wires connecting logic and routing resources, as well as the internal connection delays of the multiplexer (MUX) and look-up table (LUT); the improved minimum-width transistor area model improves accuracy by calculating the areas of NMOS and CMOS transistors separately. These accurate models are combined with the GA algorithm, and a delay-area trade-off optimization is realized by adjusting the transistor sizes. The technical scheme adopted by the invention is thus an FPGA transistor size adjusting method comprising the following steps:
1) determining key parameters influencing FPGA delay;
2) establishing a corresponding Elmore delay model for each circuit according to the influence of parameters on the delay of each sub-circuit in the FPGA;
3) combining the Elmore delay model of the FPGA with a neural network to establish a KBNN delay model, training the KBNN delay model, and determining the weights Ω and Φ and the number of hidden neurons m that minimize the training error E_t and the validation error E_v;
4) establishing an improved minimum width transistor area model, and estimating the area of an FPGA island;
5) and the delay model, the area model and the GA algorithm are combined to realize rapid transistor size adjustment.
The key parameters are 8 architecture parameters, namely: the routing channel width W, the number N of basic logic elements per logic block, the number K of look-up table (LUT) inputs, the wire segment length L, the number I of logic block inputs, the switch block flexibility F_s, the number F_cin of routing tracks to which a logic block input pin can connect, and the number F_cout of routing tracks to which a logic block output pin can connect. The sub-circuit delays of an FPGA can be expressed in the form of equation (1):
T_n = f_n(N, K, W, L, I, F_s, F_cin, F_cout, S_1, ..., S_l)    (1)
wherein T_n represents the delay of FPGA sub-circuit n, 1 ≤ n ≤ 7, and S_i is a transistor size in sub-circuit n, 1 ≤ i ≤ l. Switch block Elmore delay model:
T_SB = R_SBdrv2*(C_j,SBdrv2 + F_s*C_j,SBmux1 + F_cin*0.5*I*C_j,CBmux1) + (R_SBdrv2 + R_j,SBmux1)*(C_j,SBmux1 + C_j,SBmux2) + (R_SBdrv2 + R_j,CBmux1 + R_j,CBmux2)*(C_j,SBmux2 + C_g,SBdrv1) + 0.69*R_SBdrv1*(C_j,SBdrv1 + C_g,SBdrv2)    (2)
wherein C_j,SBmux1 and C_j,SBmux2 are the junction capacitances of the first-stage and second-stage transistors in the switch-block multiplexer, respectively; C_g,SBdrv1 and C_g,SBdrv2 are the gate capacitances of the transistors in the switch-block buffer; C_j,SBdrv1 and C_j,SBdrv2 are the junction capacitances of the transistors in the switch-block buffer; and C_j,CBmux1 is the junction capacitance of a transistor in the connection-block multiplexer. The delay models for the remaining sub-circuits are similar to equation (2).
The KBNN delay model structure comprises a multilayer perceptron (MLP) neural network and a knowledge neuron; the input parameters in equation (1) determine the number of input neurons in the KBNN structure. The input γ_i of each hidden neuron is a weighted sum of the input parameters; the activation function in the hidden neurons is the sigmoid function; the output neuron of the 3-layer MLP is a weighted sum of the hidden-neuron outputs; the output of the 3-layer MLP is the difference between the estimated and the true delay; the knowledge neuron is the established Elmore-based FPGA delay model; and the output of the KBNN is the sum of the outputs of the 3-layer MLP and the knowledge neuron.
Finally, the KBNN delay model is trained by the following algorithm, determining the input-to-hidden weights Ω, the hidden-to-MLP-output weights Φ, and the number of hidden neurons m that minimize the training error E_t and the validation error E_v.
The invention has the characteristics and beneficial effects that:
1. Compared with the traditional HSPICE-based transistor sizing method, this method completes transistor sizing quickly even under a large number of iterations; it considers the parameters influencing the delay comprehensively, reflects the relationship between each parameter and the delay more intuitively, and its results can be applied in an architecture exploration tool.
2. The knowledge-based neural network reduces the amount of training data required while preserving the nonlinear relationships in the FPGA delay, making the result more accurate.
Description of the drawings:
FIG. 1 illustrates the effect of architectural parameters on latency.
FIG. 2 is a transistor level structure and equivalent RC model of a switch block in an FPGA sub-circuit.
Fig. 3 is a neural network structure of the delay model.
Fig. 4 is a minimum width transistor area model.
Detailed Description
The invention provides a method for accurately adjusting transistor sizes that can quickly find a result meeting the design target even when many design iterations are required; the final optimization result can be applied in an architecture exploration tool to accelerate the architecture exploration process. The specific technical scheme is as follows:
1) and determining key parameters influencing the FPGA delay.
2) And establishing a corresponding Elmore delay model for each circuit according to the influence of parameters on the delay of each sub-circuit in the FPGA.
3) Combining the Elmore delay model of the FPGA with a neural network to establish a KBNN delay model, training the KBNN delay model, and determining the weights Ω and Φ and the number of hidden neurons m that minimize the training error E_t and the validation error E_v.
4) And establishing an improved minimum width transistor area model and estimating the area of the FPGA island.
5) And the delay model, the area model and the GA algorithm are combined to realize rapid transistor size adjustment.
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description will take the connection block in the FPGA sub-circuit as an example.
1. Determining key parameters affecting FPGA delay
To show that changes in the architecture parameters affect the delay of each sub-circuit in the FPGA, the invention runs a series of HSPICE experiments on a connection block: with the process parameters held constant, the values of the 8 architecture parameters are varied in turn and the change in delay is observed; the results are shown in FIG. 1. As the figure shows, the connection-block delay varies significantly with the architecture parameters, so the influence of all 8 architecture parameters on each sub-circuit is fully considered in the delay model, i.e., each sub-circuit has 8 architecture input parameters. The 8 architecture parameters are: the routing channel width W, the number N of basic logic elements per logic block, the number K of look-up table (LUT) inputs, the wire segment length L, the number I of logic block inputs, the switch block flexibility F_s, the number F_cin of routing tracks to which a logic block input pin can connect, and the number F_cout of routing tracks to which a logic block output pin can connect.
Each transistor in the FPGA circuit can be modeled as an equivalent RC network; the resistance and capacitance of a transistor depend on its size, which in turn affects the delay. Furthermore, to enable transistor sizing, the transistor sizes are included among the delay model's input parameters. The sub-circuit delays of the FPGA can therefore be expressed in the form of equation (1).
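As a first-order illustration of this size-to-RC relationship (a common scaling assumption in Elmore-style models, not a formula stated in the patent), drive resistance falls roughly as 1/size while gate and junction capacitance grow roughly linearly with size; the constants below are hypothetical:

```python
# First-order RC scaling with transistor size (illustrative assumption,
# not the patent's own extraction method).
R_MIN = 10e3     # hypothetical on-resistance of a minimum-width transistor (ohm)
C_MIN = 0.1e-15  # hypothetical capacitance of a minimum-width transistor (F)

def transistor_rc(size):
    """Return (R, C) for a transistor 'size' times the minimum width:
    resistance scales as 1/size, capacitance scales as size."""
    return R_MIN / size, C_MIN * size
```

Doubling the width then halves the resistance and doubles the capacitance, which is why sizing trades delay against area.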
T_n = f_n(N, K, W, L, I, F_s, F_cin, F_cout, S_1, ..., S_l)    (1)
wherein T_n represents the delay of FPGA sub-circuit n (1 ≤ n ≤ 7) and S_i (1 ≤ i ≤ l) are the transistor sizes of sub-circuit n. The number of transistors differs between sub-circuits, so the number of delay-model input parameters also differs between sub-circuits.
2. Establishing Elmore delay model for each sub-circuit of FPGA
All sub-circuits in the FPGA are built from transistors, which can be modeled with the Elmore model. For reasons of space, the invention analyzes the switch block as an example; the delays of the other sub-circuits can be obtained by a similar method. The transistor-level model and equivalent RC network of the switch block are shown in FIG. 2. The switch block comprises a switch-block buffer and a two-stage multiplexer.
The delay of the switch block is obtained by calculating equation (2):
T_SB = R_SBdrv2*(C_j,SBdrv2 + F_s*C_j,SBmux1 + F_cin*0.5*I*C_j,CBmux1) + (R_SBdrv2 + R_j,SBmux1)*(C_j,SBmux1 + C_j,SBmux2) + (R_SBdrv2 + R_j,CBmux1 + R_j,CBmux2)*(C_j,SBmux2 + C_g,SBdrv1) + 0.69*R_SBdrv1*(C_j,SBdrv1 + C_g,SBdrv2)    (2)
wherein C_j,SBmux1 and C_j,SBmux2 are the junction capacitances of the first-stage and second-stage transistors in the switch-block multiplexer, respectively; C_g,SBdrv1 and C_g,SBdrv2 (C_j,SBdrv1 and C_j,SBdrv2) are the gate capacitances (junction capacitances) of the transistors in the switch-block buffer; and C_j,CBmux1 is the junction capacitance of a transistor in the connection-block multiplexer.
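Equation (2) can be transcribed directly into code. The sketch below assumes the caller supplies the resistance and capacitance values (in the patent these come from the transistor sizes and process parameters); the dictionary keys are illustrative names matching the symbols in the text:

```python
def switch_block_delay(R, C, Fs, Fcin, I):
    """Elmore delay of the switch block, a direct transcription of eq. (2).

    R, C : dicts of resistances/capacitances keyed by the symbol names
           used in the text (e.g. R["SBdrv2"], C["j_SBmux1"]).
    Fs   : switch block flexibility; Fcin: input-pin track count;
    I    : number of logic block inputs.
    """
    return (R["SBdrv2"] * (C["j_SBdrv2"] + Fs * C["j_SBmux1"]
                           + Fcin * 0.5 * I * C["j_CBmux1"])
            + (R["SBdrv2"] + R["j_SBmux1"]) * (C["j_SBmux1"] + C["j_SBmux2"])
            + (R["SBdrv2"] + R["j_CBmux1"] + R["j_CBmux2"])
              * (C["j_SBmux2"] + C["g_SBdrv1"])
            + 0.69 * R["SBdrv1"] * (C["j_SBdrv1"] + C["g_SBdrv2"]))
```

The same pattern applies to the other sub-circuits, each with its own Elmore expression.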
3. Establishing KBNN delay model
The structure of the KBNN is shown in FIG. 3; it comprises a multilayer perceptron (MLP) neural network and a knowledge neuron. The input parameters in equation (1) determine the number of input neurons in the KBNN structure. The input γ_i of each hidden neuron is a weighted sum of these input parameters. The activation function in the hidden neurons is the sigmoid function. The output neuron of the 3-layer MLP is a weighted sum of the hidden-neuron outputs. The output of the 3-layer MLP is the difference between the estimated and the true delay. The knowledge neuron is the established Elmore-based FPGA delay model. The output of the KBNN is the sum of the outputs of the 3-layer MLP and the knowledge neuron.
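The forward pass of this structure can be sketched as follows. Bias terms are omitted for brevity, and the exact network details beyond what the text specifies (e.g. weight shapes) are assumptions:

```python
import numpy as np

def kbnn_forward(x, omega, phi, elmore_model):
    """KBNN output: 3-layer MLP correction plus the Elmore knowledge neuron.

    x      : input vector (architecture parameters + transistor sizes)
    omega  : (m, d) weights from the d inputs to the m hidden neurons
    phi    : (m,) weights from hidden neurons to the MLP output neuron
    elmore_model : callable returning the Elmore delay estimate for x
    """
    gamma = omega @ x                      # gamma_i: weighted sum into hidden neuron i
    hidden = 1.0 / (1.0 + np.exp(-gamma))  # sigmoid activation
    mlp_out = phi @ hidden                 # MLP models the Elmore model's error
    return mlp_out + elmore_model(x)       # KBNN = MLP correction + knowledge neuron
```

Because the knowledge neuron already carries the delay trend, the MLP only needs to learn the (small) residual, which is what reduces the required training data.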
Finally, the KBNN delay model is trained by the following algorithm, determining the input-to-hidden weights Ω, the hidden-to-MLP-output weights Φ, and the number of hidden neurons m that minimize the training error E_t and the validation error E_v.
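The text does not fix the training algorithm, so the following is one plausible sketch: plain gradient descent on the MLP (which learns the residual between the true delay and the Elmore estimate), with a search over candidate hidden-neuron counts m to minimize the combined error E_t + E_v. The candidate set, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

def train_kbnn(x_train, y_train, x_val, y_val, elmore_model,
               m_candidates=(2, 4, 8, 16), epochs=500, lr=1e-3, seed=0):
    """Return (m, Omega, Phi) minimizing training + validation MSE."""
    rng = np.random.default_rng(seed)
    d = x_train.shape[1]
    # Residuals the MLP must learn: true delay minus the knowledge neuron.
    r_train = y_train - np.array([elmore_model(x) for x in x_train])
    r_val = y_val - np.array([elmore_model(x) for x in x_val])
    best = None
    for m in m_candidates:
        omega = rng.normal(scale=0.1, size=(m, d))
        phi = rng.normal(scale=0.1, size=m)
        for _ in range(epochs):
            g = omega @ x_train.T                  # (m, n) hidden pre-activations
            h = 1.0 / (1.0 + np.exp(-g))           # sigmoid
            err = phi @ h - r_train                # (n,) prediction error
            grad_phi = h @ err / len(err)
            grad_omega = ((phi[:, None] * h * (1 - h)) * err) @ x_train / len(err)
            phi -= lr * grad_phi
            omega -= lr * grad_omega
        e_t = np.mean((phi @ (1 / (1 + np.exp(-omega @ x_train.T))) - r_train) ** 2)
        e_v = np.mean((phi @ (1 / (1 + np.exp(-omega @ x_val.T))) - r_val) ** 2)
        if best is None or e_t + e_v < best[0]:    # keep m with smallest E_t + E_v
            best = (e_t + e_v, m, omega, phi)
    return best[1], best[2], best[3]
```

In practice one would add bias terms, input normalization, and early stopping, but the model-selection loop over m is the part the text emphasizes.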
4. Establishing minimum width transistor area model
The minimum-width transistor area model is shown in FIG. 4. A minimum-width transistor is defined as the smallest contactable transistor in a given process technology, and its area is the sum of the area of the transistor itself and the spacing adjacent to it.
The present invention uses an improved version of the minimum-width transistor area model: the area of NMOS transistors is calculated with equation (3), and the area of CMOS transistors with equation (4), which accounts for the area saved by placing PMOS transistors in the N-well spacing. Here x is the drive strength of the transistor.
[Equations (3) and (4) appear only as images in the original document; they give, respectively, the area of an NMOS transistor and of a CMOS transistor as functions of the drive strength x.]
5. Transistor size adjusting method
As shown in equation (5), the optimization target is to minimize the product of the delay T and the area A of the FPGA island; the relative emphasis on delay versus area is set by the weight α, and S_i (1 ≤ i ≤ l) is the size of each transistor in the circuit.
minimize T^α(S_1, S_2, ..., S_l) × A^(1-α)(S_1, S_2, ..., S_l)    (5)
The GA algorithm can handle a wide range of optimization problems and runs quickly in this setting, so the delay model, the area model, and the GA algorithm are combined to obtain transistor sizing results quickly and accurately. Architecture exploration with this method is nearly 2863 times faster than the HSPICE-based method, with an error below 3 percent.
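A minimal GA sketch for objective (5) is given below. The population size, truncation selection, one-point crossover, and Gaussian mutation are illustrative choices, not settings taken from the patent; `delay_model` and `area_model` stand in for the KBNN delay model and the area model:

```python
import random

def ga_size_transistors(delay_model, area_model, n_transistors, alpha,
                        size_bounds=(1.0, 16.0), pop=30, gens=100, seed=0):
    """GA search for transistor sizes minimizing T^alpha * A^(1-alpha)."""
    rng = random.Random(seed)
    lo, hi = size_bounds

    def fitness(sizes):  # objective (5); lower is better
        return (delay_model(sizes) ** alpha) * (area_model(sizes) ** (1 - alpha))

    population = [[rng.uniform(lo, hi) for _ in range(n_transistors)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        survivors = population[:pop // 2]              # truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_transistors) if n_transistors > 1 else 0
            child = a[:cut] + b[cut:]                  # one-point crossover
            i = rng.randrange(n_transistors)           # single-point mutation
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.5)))
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)
```

Because every fitness evaluation is a model call rather than an HSPICE run, the GA can afford the large number of iterations the text mentions.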

Claims (1)

1. A method for adjusting the size of an FPGA transistor is characterized by comprising the following steps:
1) determining key parameters influencing FPGA delay;
2) establishing a corresponding Elmore delay model for each circuit according to the influence of parameters on the delay of each sub-circuit in the FPGA;
3) combining the Elmore delay model of the FPGA with a neural network to establish a KBNN delay model, training the KBNN delay model, and determining the weights Ω between the input neurons and the hidden neurons, the weights Φ between the hidden neurons and the MLP output neuron, and the number m of hidden neurons that minimize the training error E_t and the validation error E_v;
4) establishing an improved minimum width transistor area model, and estimating the area of an FPGA island;
the minimum width transistor area is defined as the minimum accessible transistor under a specific process technology, the area is the sum of the area of the transistor and the space adjacent to the transistor, the area of the NMOS transistor is calculated by using a formula (3), and the area of the CMOS transistor is calculated by using a formula (4):
[Equations (3) and (4) appear only as images in the original document; they give, respectively, the area of an NMOS transistor and of a CMOS transistor as functions of the drive strength x.]
wherein x is the drive strength of the transistor;
5) utilizing the KBNN delay model in the step 3), the area model in the step 4) and a GA algorithm represented by the following formula:
minimize T^α(S_1, S_2, ..., S_l) × A^(1-α)(S_1, S_2, ..., S_l)
wherein the optimization target is to minimize the product of the delay T and the area A of the FPGA island; the emphasis on delay versus area is set by the weight α, and S_i (1 ≤ i ≤ l) is the size of each transistor in the circuit, so that transistor sizing can be completed quickly;
the key parameters are 8 architecture parameters, namely: the routing channel width W, the number N of basic logic elements per logic block, the number K of look-up table (LUT) inputs, the wire segment length L, the number I of logic block inputs, the switch block flexibility F_s, the number F_cin of routing tracks to which a logic block input pin can connect, and the number F_cout of routing tracks to which a logic block output pin can connect; the sub-circuit delay of the FPGA is represented in the form of equation (1):
T_n = f_n(N, K, W, L, I, F_s, F_cin, F_cout, S_1, ..., S_l)    (1)
wherein T_n represents the delay of FPGA sub-circuit n, 1 ≤ n ≤ 7, and S_i is a transistor size in sub-circuit n;
switching block Elmore delay model:
T_SB = R_SBdrv2*(C_j,SBdrv2 + F_s*C_j,SBmux1 + F_cin*0.5*I*C_j,CBmux1) + (R_SBdrv2 + R_j,SBmux1)*(C_j,SBmux1 + C_j,SBmux2) + (R_SBdrv2 + R_j,CBmux1 + R_j,CBmux2)*(C_j,SBmux2 + C_g,SBdrv1) + 0.69*R_SBdrv1*(C_j,SBdrv1 + C_g,SBdrv2)    (2)
wherein C_j,SBmux1 and C_j,SBmux2 are the junction capacitances of the first-stage and second-stage transistors in the switch-block multiplexer, respectively; C_g,SBdrv1 and C_g,SBdrv2 are the gate capacitances of the transistors in the switch-block buffer; C_j,SBdrv1 and C_j,SBdrv2 are the junction capacitances of the transistors in the switch-block buffer; and C_j,CBmux1 is the junction capacitance of a transistor in the connection-block multiplexer;
the KBNN time-delay model structure comprises a multilayer perceptron MLP (Multi layer Perceptron) neural network and a knowledge neuron, the input parameter in the formula (1) determines the number of input neurons in the KBNN structure, and the input gamma of each hidden neuroniThe sum of the weights of the input parameters is adopted as an activation function in the hidden neurons, the sigmoid function is adopted as an output neuron of the 3-layer MLP, the weighted sum of the output of the hidden neurons is adopted as the output of the 3-layer MLP, the output of the 3-layer MLP is the difference between an estimated value and a real value of delay, the knowledge neuron is an established FPGA delay model based on Elmore, and the output of KBNN is the sum of the output of the 3-layer MLP and the knowledge neuron.
CN201611105208.5A 2016-12-05 2016-12-05 FPGA transistor size adjusting method Expired - Fee Related CN106776442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611105208.5A CN106776442B (en) 2016-12-05 2016-12-05 FPGA transistor size adjusting method


Publications (2)

Publication Number Publication Date
CN106776442A CN106776442A (en) 2017-05-31
CN106776442B true CN106776442B (en) 2020-11-06

Family

ID=58874115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611105208.5A Expired - Fee Related CN106776442B (en) 2016-12-05 2016-12-05 FPGA transistor size adjusting method

Country Status (1)

Country Link
CN (1) CN106776442B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107742051B (en) * 2017-11-16 2021-04-30 复旦大学 Method for quickly optimizing size of FPGA circuit transistor
US20220138570A1 (en) * 2020-11-05 2022-05-05 Mediatek Inc. Trust-Region Method with Deep Reinforcement Learning in Analog Design Space Exploration
CN115392166B (en) * 2022-10-24 2023-01-20 北京智芯微电子科技有限公司 Transistor width determination method and device, electronic equipment and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2013181664A1 (en) * 2012-06-01 2013-12-05 The Regents Of The University Of California Programmable logic circuit architecture using resistive memory elements
CN103678745A (en) * 2012-09-18 2014-03-26 中国科学院微电子研究所 Cross-platform multilevel integrated design system for FPGA (field programmable gate array)

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
WO2013181664A1 (en) * 2012-06-01 2013-12-05 The Regents Of The University Of California Programmable logic circuit architecture using resistive memory elements
CN103678745A (en) * 2012-09-18 2014-03-26 中国科学院微电子研究所 Cross-platform multilevel integrated design system for FPGA (field programmable gate array)

Non-Patent Citations (1)

Title
Feedforward Neural Network Models for FPGA Routing Channel Width Estimation; 刘强, 高明 et al.; Chinese Journal of Electronics; 2016-01-31; Vol. 25, No. 1; full text *


Similar Documents

Publication Publication Date Title
Wang et al. Learning to design circuits
Xia et al. MNSIM: Simulation platform for memristor-based neuromorphic computing system
Zhang et al. Design guidelines of RRAM based neural-processing-unit: A joint device-circuit-algorithm analysis
US9026964B2 (en) Intelligent metamodel integrated Verilog-AMS for fast and accurate analog block design exploration
US8726211B2 (en) Generating an equivalent waveform model in static timing analysis
CN106776442B (en) FPGA transistor size adjusting method
KR20200119192A (en) System and method for compact neural network modeling of transistors
US20060107244A1 (en) Method for designing semiconductor intgrated circuit and system for designing the same
Ayala et al. Efficient hardware implementation of radial basis function neural network with customized-precision floating-point operations
Raitza et al. Quantitative characterization of reconfigurable transistor logic gates
Sasikumar et al. Operational amplifier circuit sizing based on NSGA-II and particle swarm optimization
Mukhopadhyay et al. Modeling and design of a nano scale cmos inverter for symmetric switching characteristics
Malhotra et al. Implementation of AI in the field of VLSI: A Review
Nasser et al. Power modeling on FPGA: A neural model for RT-level power estimation
TW202240455A (en) Poly-bit cells
Shah et al. Aspect ratio estimation for MOS amplifier using machine learning
CN116438536A (en) Modeling timing behavior using extended sensitivity data of physical parameters
CN110738014A (en) Method for determining key process fluctuation in statistical analysis of time sequence circuits
Elsiginy et al. A novel hybrid analog design optimizer with particle swarm optimization and modern deep neural networks
Chen et al. Current source model of combinational logic gates for accurate gate-level circuit analysis and timing analysis
US11663384B1 (en) Timing modeling of multi-stage cells using both behavioral and structural models
Kim et al. A Speculative Divide-and-Conquer Optimization Method for Large Analog/Mixed-Signal Circuits: A High-Speed FFE SST Transmitter Example
Jaiswal et al. Netlist optimization for CMOS place and route in microwind
Dhabak et al. Adaptive sampling algorithm for ANN-based performance modeling of nano-scale CMOS inverter
Ho et al. Automated design optimization for CMOS rectifier using deep neural network (DNN)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201106
Termination date: 20211205