TWI427637B - Non-volatile memory with background data latch caching during program operations and methods therefor - Google Patents

Non-volatile memory with background data latch caching during program operations and methods therefor

Info

Publication number
TWI427637B
Authority
TW
Taiwan
Prior art keywords
memory
data
page
operation
read
Prior art date
Application number
TW96115926A
Other languages
Chinese (zh)
Other versions
TW200809862A (en)
Inventor
Yan Li
Original Assignee
Sandisk Technologies Inc
Priority date
Filing date
Publication date
Priority claimed from US11/382,006 (US7505320B2)
Application filed by Sandisk Technologies Inc
Publication of TW200809862A
Application granted
Publication of TWI427637B

Description

Non-volatile memory with background data latch caching during program operations and methods therefor

The present invention relates generally to non-volatile semiconductor memory such as electrically erasable programmable read-only memory (EEPROM) and flash EEPROM, and in particular to cache operations based on shared latch structures that allow overlapping memory operations.

Solid-state memory capable of non-volatile storage of charge, particularly in the form of EEPROM and flash EEPROM packaged as small form factor cards, has recently become the storage of choice in a variety of mobile and handheld devices, notably information appliances and consumer electronics products. Unlike RAM (random access memory), which is also solid-state memory, flash memory is non-volatile, retaining its stored data even after power is turned off. Despite the higher cost, flash memory is increasingly being used in mass storage applications. Conventional mass storage based on rotating magnetic media, such as hard drives and floppy disks, is unsuitable for the mobile and handheld environment. This is because disk drives tend to be bulky, are prone to mechanical failure, and have high latency and high power requirements. These undesirable attributes make disk-based storage impractical in most mobile and portable applications. On the other hand, flash memory, both embedded and in the form of a removable card, is ideally suited to the mobile and handheld environment because of its small size, low power consumption, high speed and high reliability.

EEPROM and electrically programmable read-only memory (EPROM) are non-volatile memories that can be erased and have new data written or "programmed" into their memory cells. Both utilize a floating (unconnected) conductive gate, in a field effect transistor structure, positioned over a channel region in a semiconductor substrate between source and drain regions. A control gate is then provided over the floating gate. The threshold voltage characteristic of the transistor is controlled by the amount of charge retained on the floating gate. That is, for a given level of charge on the floating gate, there is a corresponding voltage (threshold) that must be applied to the control gate before the transistor is turned "on" to permit conduction between its source and drain regions.

The floating gate can hold a range of charges and can therefore be programmed to any threshold voltage level within a threshold voltage window. The size of the threshold voltage window is delimited by the minimum and maximum threshold levels of the device, which in turn correspond to the range of charges that can be programmed onto the floating gate. The threshold window generally depends on the memory device's characteristics, operating conditions and history. Each distinct, resolvable threshold voltage level range within the window may, in principle, be used to designate a definite memory state of the cell.

The transistor serving as a memory cell is typically programmed to a "programmed" state by one of two mechanisms. In "hot electron injection," a high voltage applied to the drain accelerates electrons across the substrate channel region. At the same time, a high voltage applied to the control gate pulls the hot electrons through a thin gate dielectric onto the floating gate. In "tunneling injection," a high voltage is applied to the control gate relative to the substrate. In this way, electrons are pulled from the substrate to the intervening floating gate.

The memory device may be erased by a number of mechanisms. For EPROM, the memory is bulk erasable by removing the charge from the floating gate with ultraviolet radiation. For EEPROM, a memory cell is electrically erasable by applying a high voltage to the substrate relative to the control gate so as to induce electrons in the floating gate to tunnel through a thin oxide to the substrate channel region (i.e., Fowler-Nordheim tunneling). Typically, the EEPROM is erasable byte by byte. For flash EEPROM, the memory is electrically erasable either all at once or one or more blocks at a time, where a block may consist of 512 bytes or more of memory.

Examples of Non-Volatile Memory Cells

Memory devices typically comprise one or more memory chips that may be mounted on a card. Each memory chip comprises an array of memory cells supported by peripheral circuits such as decoders and erase, write and read circuits. The more sophisticated memory devices also come with a controller that performs intelligent and higher-level memory operations and interfacing. There are many commercially successful non-volatile solid-state memory devices being used today. These memory devices may employ different types of memory cells, each type having one or more charge storage elements.

Figures 1A through 1E illustrate schematically different examples of non-volatile memory cells.

Figure 1A illustrates schematically a non-volatile memory in the form of an EEPROM cell with a floating gate for storing charge. An electrically erasable and programmable read-only memory (EEPROM) has a structure similar to that of EPROM, but additionally provides a mechanism for loading charge onto, and removing charge from, its floating gate electrically upon application of proper voltages, without the need for exposure to UV radiation. Examples of such cells and methods of manufacturing them are given in U.S. Patent No. 5,595,924.

Figure 1B illustrates schematically a flash EEPROM cell having both a select gate and a control or steering gate. The memory cell 10 has a "split channel" 12 between source 14 and drain 16 diffusions. A cell is formed effectively with two transistors T1 and T2 in series. T1 serves as a memory transistor having a floating gate 20 and a control gate 30. The floating gate is capable of storing a selectable amount of charge. The amount of current that can flow through the T1 portion of the channel depends on the voltage on the control gate 30 and the amount of charge residing on the intervening floating gate 20. T2 serves as a select transistor having a select gate 40. When T2 is turned on by a voltage at the select gate 40, it allows the current in the T1 portion of the channel to pass between the source and the drain. The select transistor provides a switch along the source-drain channel that is independent of the voltage at the control gate. One advantage is that it can be used to turn off those cells that still conduct at zero control gate voltage owing to charge depletion (positive charge) at their floating gates. The other advantage is that it allows source side injection programming to be more easily implemented.

One simple embodiment of the split-channel memory cell is one in which the select gate and the control gate are connected to the same word line, as indicated schematically by the dotted line shown in Figure 1B. This is accomplished by having a charge storage element (floating gate) positioned over one portion of the channel and a control gate structure (which is part of a word line) positioned over the other channel portion as well as over the charge storage element. This effectively forms a cell with two transistors in series: one (the memory transistor) with a combination of the amount of charge on the charge storage element and the voltage on the word line controlling the amount of current that can flow through its portion of the channel, and the other (the select transistor) having the word line alone serving as its gate. Examples of such cells, their use in memory systems and methods of manufacturing them are given in U.S. Patent Nos. 5,070,032, 5,095,344, 5,315,541, 5,343,063 and 5,661,053.

A more refined embodiment of the split-channel cell shown in Figure 1B is one in which the select gate and the control gate are independent and are not connected by the dotted line between them. One implementation has the control gates of one column in an array of cells connected to a control (or steering) line perpendicular to the word line. The effect is to relieve the word line from having to perform two functions at the same time when reading or programming a selected cell. Those two functions are (1) to serve as the gate of a select transistor, thus requiring a proper voltage to turn the select transistor on and off, and (2) to drive the voltage of the charge storage element to a desired level through the electric field (capacitive) coupling between the word line and the charge storage element. It is often difficult to perform both of these functions in an optimum manner with a single voltage. With separate control of the control gate and the select gate, the word line need only perform function (1), while the added control line performs function (2). This capability allows for design of higher-performance programming in which the programming voltage is geared to the targeted data. The use of independent control (or steering) gates in a flash EEPROM array is described, for example, in U.S. Patent Nos. 5,313,421 and 6,222,762.

Figure 1C illustrates schematically another flash EEPROM cell having dual floating gates and independent select and control gates. The memory cell 10 is similar to that of Figure 1B except that it effectively has three transistors in series. In this type of cell, two storage elements (i.e., those of T1-left and T1-right) are included over its channel between source and drain diffusions, with a select transistor T1 in between them. The memory transistors have floating gates 20 and 20', and control gates 30 and 30', respectively. The select transistor T2 is controlled by the select gate 40. At any one time, only one of the pair of memory transistors is accessed for read or write. When the storage unit T1-left is being accessed, both T2 and T1-right are turned on to allow the current in the T1-left portion of the channel to pass between the source and the drain. Similarly, when the storage unit T1-right is being accessed, T2 and T1-left are turned on. Erase is effected by having a portion of the select gate polysilicon in close proximity to the floating gate and applying a substantial positive voltage (e.g., 20 V) to the select gate so that the electrons stored within the floating gate can tunnel to the select gate polysilicon.

Figure 1D illustrates schematically a string of memory cells organized into a NAND cell. A NAND cell 50 consists of a series of memory transistors M1, M2, ... Mn (n = 4, 8, 16 or higher) daisy-chained by their sources and drains. A pair of select transistors S1, S2 controls the connection of the memory transistor chain to the outside via the NAND cell's source terminal 54 and drain terminal 56. In a memory array, when the source select transistor S1 is turned on, the source terminal is coupled to a source line. Similarly, when the drain select transistor S2 is turned on, the drain terminal of the NAND cell is coupled to a bit line of the memory array. Each memory transistor in the chain has a charge storage element to store a given amount of charge so as to represent an intended memory state. A control gate of each memory transistor provides control over read and write operations. A control gate of each of the select transistors S1, S2 provides control access to the NAND cell via its source terminal 54 and drain terminal 56, respectively.

When an addressed memory transistor within a NAND cell is read and verified during programming, its control gate is supplied with an appropriate voltage. At the same time, the rest of the non-addressed memory transistors in the NAND cell 50 are fully turned on by application of sufficient voltage on their control gates. In this way, a conductive path is effectively created from the source of the individual memory transistor to the source terminal 54 of the NAND cell, and likewise from the drain of the individual memory transistor to the drain terminal 56 of the cell. Memory devices with such NAND cell structures are described in U.S. Patent Nos. 5,570,315, 5,903,495 and 6,046,935.

Figure 1E illustrates schematically a non-volatile memory with a dielectric layer for storing charge. A dielectric layer is used instead of the conductive floating gate elements described earlier. Such memory devices utilizing dielectric storage elements have been described by Eitan et al., "NROM: A Novel Localized Trapping, 2-Bit Nonvolatile Memory Cell," IEEE Electron Device Letters, vol. 21, no. 11, November 2000, pp. 543-545. An ONO dielectric layer extends across the channel between source and drain diffusions. The charge for one data bit is localized in the dielectric layer adjacent to the drain, and the charge for the other data bit is localized in the dielectric layer adjacent to the source. For example, U.S. Patent Nos. 5,768,192 and 6,011,725 disclose a non-volatile memory cell having a trapping dielectric sandwiched between two silicon dioxide layers. Multi-state data storage is implemented by separately reading the binary states of the spatially separated charge storage regions within the dielectric.

Memory Array

A memory device typically comprises a two-dimensional array of memory cells arranged in rows and columns and addressable by word lines and bit lines. The array can be formed according to a NOR type or a NAND type architecture.

NOR Array

Figure 2 illustrates an example of a NOR array of memory cells. Memory devices with a NOR type architecture have been implemented with cells of the type illustrated in Figure 1B or 1C. Each row of memory cells is connected by their sources and drains in a daisy-chain manner. This design is sometimes referred to as a virtual ground design. Each memory cell 10 has a source 14, a drain 16, a control gate 30 and a select gate 40. The cells in a row have their select gates connected to a word line 42. The cells in a column have their sources and drains respectively connected to selected bit lines 34 and 36. In some embodiments where the memory cells have their control gates and select gates controlled independently, a steering line 36 also connects the control gates of the cells in a column.

Many flash EEPROM devices are implemented with memory cells in which each cell is formed with its control gate and select gate connected together. In this case, there is no need for steering lines, and a word line simply connects all the control gates and select gates of the cells along each row. Examples of these designs are disclosed in U.S. Patent Nos. 5,172,338 and 5,418,752. In these designs, the word line essentially performs two functions: row selection and supplying the control gate voltage to all cells in the row for reading or programming.

NAND Array

Figure 3 illustrates an example of a NAND array of memory cells, such as those shown in Figure 1D. Along each column of NAND cells, a bit line is coupled to the drain terminal 56 of each NAND cell. Along each row of NAND cells, a source line may connect all their source terminals 54. The control gates of the NAND cells along a row are also connected to a series of corresponding word lines. An entire row of NAND cells can be addressed by turning on the pair of select transistors (see Figure 1D) with appropriate voltages applied to their control gates via the connected word lines. When a memory transistor within the chain of a NAND cell is being read, the remaining memory transistors in the chain are turned on hard via their associated word lines, so that the current flowing through the chain depends essentially on the level of charge stored in the cell being read. An example of a NAND architecture array and its operation as part of a memory system is found in U.S. Patent Nos. 5,570,315, 5,774,397 and 6,046,935.

Block Erase

Programming of charge storage memory devices can only result in adding more charge to their charge storage elements. Therefore, prior to a program operation, existing charge in a charge storage element must be removed (or erased). Erase circuits (not shown) are provided to erase one or more blocks of memory cells. A non-volatile memory such as EEPROM is referred to as a "flash" EEPROM when an entire array of cells, or significant groups of cells of the array, are electrically erased together (that is, in a flash). Once erased, the group of cells can then be reprogrammed. The group of cells erasable together may consist of one or more addressable erase units. An erase unit or block typically stores one or more pages of data, the page being the unit of programming and reading, although more than one page may be programmed or read in a single operation. Each page typically stores one or more sectors of data, the size of the sector being defined by the host system. An example is a sector of 512 bytes of user data, following a standard established with magnetic disk drives, plus some number of bytes of overhead information about the user data and/or the block in which it is stored.

Read/Write Circuits

In the usual two-state EEPROM cell, at least one current breakpoint level is established so as to partition the conduction window into two regions. When a cell is read by applying predetermined, fixed voltages, its source/drain current is resolved into a memory state by comparing it with the breakpoint level (or reference current IREF). If the current read is higher than that of the breakpoint level, the cell is determined to be in one logical state (e.g., a "zero" state). On the other hand, if the current is less than that of the breakpoint level, the cell is determined to be in the other logical state (e.g., a "one" state). Thus, such a two-state cell stores one bit of digital information. A reference current source, which may be externally programmable, is often provided as part of the memory system to generate the breakpoint level current.
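As a minimal illustration of this comparison, the following sketch (assumed reference value and function names, not taken from the patent) resolves a cell's sensed current against a single breakpoint current to recover the stored bit.

#include <stdio.h>

#define I_REF_UA 2.0  /* assumed breakpoint/reference current, in microamps */

/* A current above the breakpoint resolves to the "zero" state; a current
 * below the breakpoint resolves to the "one" state, as described above. */
int read_two_state_cell(double cell_current_ua)
{
    return (cell_current_ua > I_REF_UA) ? 0 : 1;
}

int main(void)
{
    printf("%d\n", read_two_state_cell(5.0)); /* strongly conducting cell -> 0 */
    printf("%d\n", read_two_state_cell(0.3)); /* weakly conducting cell   -> 1 */
    return 0;
}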

In order to increase memory capacity, flash EEPROM devices are being fabricated with higher and higher density as the state of semiconductor technology advances. Another method of increasing storage capacity is to have each memory cell store more than two states.

For a multi-state or multi-level EEPROM memory cell, the conduction window is partitioned into more than two regions by more than one breakpoint, such that each cell is capable of storing more than one bit of data. The information that a given EEPROM array can store is thus increased with the number of states that each cell can store. EEPROM or flash EEPROM with multi-state or multi-level memory cells has been described in U.S. Patent No. 5,172,338.

In practice, the memory state of a cell is usually read by sensing the conduction current across the source and drain electrodes of the cell when a reference voltage is applied to the control gate. Thus, for each given charge on the floating gate of a cell, a corresponding conduction current with respect to a fixed reference control gate voltage may be detected. Similarly, the range of charge programmable onto the floating gate defines a corresponding threshold voltage window or a corresponding conduction current window.

Alternatively, instead of detecting the conduction current within a partitioned current window, it is possible to set the threshold voltage for a given memory state under test at the control gate and detect whether the conduction current is lower or higher than a threshold current. In one implementation, the detection of the conduction current relative to a threshold current is accomplished by examining the rate at which the conduction current discharges through the capacitance of the bit line.

Figure 4 illustrates the relationship between the source-drain current ID and the control gate voltage VCG for four different charges Q1-Q4 that the floating gate may selectively store at any one time. The four solid ID versus VCG curves represent four possible charge levels that can be programmed onto the floating gate of a memory cell, respectively corresponding to four possible memory states. As an example, the threshold voltage window of a population of cells may range from 0.5 V to 3.5 V. Six memory states may be demarcated by partitioning the threshold window into five regions in intervals of 0.5 V each. For example, if a reference current IREF of 2 μA is used as shown, then the cell programmed with Q1 may be considered to be in memory state "1", since its curve intersects IREF in the region of the threshold window demarcated by VCG = 0.5 V and 1.0 V. Similarly, Q4 is in memory state "5".
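A worked illustration of this mapping, using the example numbers from the text (a window bottom of 0.5 V, 0.5 V regions, and six states 0 through 5), is sketched below. The function name and the clamping of the top region are illustrative interpretations, not definitions from the patent.

#include <stdio.h>

int state_from_threshold(double vth)
{
    const double window_lo = 0.5;   /* bottom of the threshold window (V) */
    const double step = 0.5;        /* width of each demarcated region (V) */
    int state;

    if (vth < window_lo)
        return 0;                              /* lowest (erased) state */
    state = 1 + (int)((vth - window_lo) / step);
    return state > 5 ? 5 : state;              /* clamp to the highest state */
}

int main(void)
{
    printf("Q1 -> state %d\n", state_from_threshold(0.7)); /* crosses IREF between 0.5 V and 1.0 V -> 1 */
    printf("Q4 -> state %d\n", state_from_threshold(3.2)); /* high threshold -> 5 */
    return 0;
}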

As can be seen from the description above, the more states a memory cell is made to store, the more finely divided its threshold window becomes. This requires higher precision in programming and reading operations in order to achieve the required resolution.

U.S. Patent No. 4,357,685 discloses a method of programming a 2-state EPROM in which, when a cell is programmed to a given state, it is subjected to successive programming voltage pulses, each adding incremental charge to the floating gate. In between pulses, the cell is read back, or verified, to determine its source-drain current relative to the breakpoint level. Programming stops when the current state has been verified to have reached the desired state. The programming pulse train used may have increasing period or amplitude.

Prior art programming circuits simply apply programming pulses to step through the threshold window from the erased or ground state until the target state is reached. Practically, to allow for adequate resolution, each partitioned or demarcated region requires at least about five programming steps to traverse. The performance is acceptable for 2-state memory cells. However, for multi-state cells, the number of steps required increases with the number of partitions, and therefore the programming precision or resolution must increase. For example, a 16-state cell may require on average at least 40 programming pulses to program to a target state.
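The program-and-verify loop described in these two paragraphs can be sketched as follows. The simulated cell, starting pulse amplitude and step size are purely illustrative assumptions and are not taken from the patent.

#include <stdbool.h>
#include <stdio.h>

static double cell_vth = 0.0;                  /* simulated cell threshold (V) */

static void apply_program_pulse(double vpgm)
{
    cell_vth += 0.05 * vpgm / 16.0;            /* toy model: each pulse adds a little charge */
}

static bool verify(double target_vth)
{
    return cell_vth >= target_vth;             /* read back and compare with the target */
}

static bool program_cell(double target_vth)
{
    double vpgm = 16.0;                        /* illustrative starting pulse amplitude (V) */
    const double vpgm_step = 0.3;              /* staircase increment per pulse */
    const int max_pulses = 40;

    for (int i = 0; i < max_pulses; i++) {
        apply_program_pulse(vpgm);
        if (verify(target_vth))                /* verify between pulses */
            return true;
        vpgm += vpgm_step;                     /* next pulse is slightly stronger */
    }
    return false;                              /* cell failed to reach the target */
}

int main(void)
{
    printf("programmed: %s\n", program_cell(1.5) ? "yes" : "no");
    return 0;
}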

Figure 5 illustrates schematically a memory device with a typical arrangement of a memory array 100 accessible by read/write circuits 170 via a row decoder 130 and a column decoder 160. As described in connection with Figures 2 and 3, a memory transistor of a memory cell in the memory array 100 is addressable via a selected set of word lines and bit lines. The row decoder 130 selects one or more word lines and the column decoder 160 selects one or more bit lines in order to apply appropriate voltages to the respective gates of the addressed memory transistor. Read/write circuits 170 are provided to read or write (program) the memory states of the addressed memory transistors. The read/write circuits 170 comprise a number of read/write modules connectable via bit lines to the memory elements in the array.

Figure 6A is a schematic block diagram of an individual read/write module 190. Essentially, during a read or verify, a sense amplifier determines the current flowing through the drain of an addressed memory transistor connected via a selected bit line. The current depends on the charge stored in the memory transistor and its control gate voltage. For example, in a multi-state EEPROM cell, its floating gate can be charged to one of several different levels. For a 4-level cell, it may be used to store two bits of data. The level detected by the sense amplifier is converted by level-to-bits conversion logic into a set of data bits to be stored in data latches.

Factors Affecting Read/Write Performance and Accuracy

In order to improve read and program performance, multiple charge storage elements or memory transistors in the array are read or programmed in parallel. Thus, a logical "page" of memory elements is read or programmed together. In existing memory architectures, a row typically contains several interleaved pages. All memory elements of a page are read or programmed together. The column decoder selectively connects each of the interleaved pages to a corresponding number of read/write modules. For example, in one implementation, the memory array is designed to have a page size of 532 bytes (512 bytes plus 20 bytes of overhead). If each column contains a drain bit line and there are two interleaved pages per row, this amounts to 8512 columns, with each page associated with 4256 columns. There will be 4256 sensing modules connectable to read or write in parallel either all the even bit lines or all the odd bit lines. In this way, a page of 4256 bits (i.e., 532 bytes) of data is read from or programmed into the page of memory elements in parallel. The read/write modules forming the read/write circuits 170 can be arranged into various architectures.
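The page-size arithmetic in this example can be checked directly; the short snippet below only restates the numbers already given in the text.

#include <assert.h>

int main(void)
{
    int page_bytes = 512 + 20;                /* user data + overhead = 532 bytes */
    int bits_per_page = page_bytes * 8;       /* 4256 bit lines per page */
    int columns_per_row = bits_per_page * 2;  /* two interleaved pages -> 8512 columns */

    assert(bits_per_page == 4256);
    assert(columns_per_row == 8512);
    return 0;
}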

Referring to Figure 5, the read/write circuits 170 are organized into banks of read/write stacks 180. Each read/write stack 180 is a stack of read/write modules 190. In a memory array, the column pitch is determined by the size of the one or two transistors that occupy it. However, as can be seen from Figure 6A, the circuitry of a read/write module will likely be implemented with many more transistors and circuit elements and will therefore occupy a space spanning many columns. In order to service more than one of the occupied columns, multiple modules are stacked on top of each other.

Figure 6B shows the read/write stack of Figure 5 conventionally implemented as a stack of read/write modules 190. For example, a read/write module may extend over sixteen columns; a read/write stack 180 with a stack of eight read/write modules can then be used to service eight columns in parallel. The read/write stack can be coupled via a column decoder to either the eight odd-numbered columns (1, 3, 5, 7, 9, 11, 13, 15) or the eight even-numbered columns (2, 4, 6, 8, 10, 12, 14, 16) of the bank.

As mentioned before, conventional memory devices improve read/write operations by operating on all even or all odd bit lines at a time in a massively parallel manner. This architecture of a row consisting of two interleaved pages helps to alleviate the problem of fitting the block of read/write circuits. It is also dictated by considerations of controlling bit-line to bit-line capacitive coupling. A block decoder is used to multiplex the set of read/write modules to either the even page or the odd page. In this way, whenever one set of bit lines is being read or programmed, the interleaving set can be grounded to minimize coupling to immediate neighbors.

However, the interleaving page architecture is disadvantageous in at least three respects. First, it requires additional multiplexing circuitry. Second, it is slow in performance. To finish reading or programming the memory cells connected by a word line, or in a row, two read or two program operations are required. Third, it is also not optimal in addressing other disturb effects, such as field coupling between neighboring charge storage elements at the floating gate level when two neighbors, for example residing respectively in the odd and even pages, are programmed at different times.

The problem of neighboring field coupling becomes more pronounced with ever closer spacing between memory transistors. In a memory transistor, a charge storage element is sandwiched between a channel region and a control gate. The current that flows in the channel region is a function of the resultant electric field contributed by the fields at the control gate and the charge storage element. With ever increasing density, memory transistors are formed closer and closer together. The field from neighboring charge elements then becomes a significant contributor to the resultant field of an affected cell. The neighboring field depends on the charge programmed into the charge storage elements of the neighbors. This perturbing field is dynamic in nature, as it changes with the programmed states of the neighbors. Thus, an affected cell may read differently at different times depending on the changing states of the neighbors.

The conventional architecture of interleaving pages exacerbates the error caused by neighboring floating gate coupling. Since the even page and the odd page are programmed and read independently of each other, a page may be programmed under one set of conditions but read back under an entirely different set of conditions, depending on what has happened on the interleaving page in the meantime. The read errors become more severe with increasing density, requiring a more accurate read operation and coarser partitioning of the threshold window for multi-state implementation. Performance suffers and the potential capacity of a multi-state implementation is limited.

United States Patent Publication No. US-2004-0060031-A1 discloses a high-performance yet compact non-volatile memory device having a large block of read/write circuits to read and write a corresponding block of memory cells in parallel. In particular, the memory device has an architecture that reduces redundancy in the block of read/write circuits to a minimum. Significant savings in space as well as power are accomplished by redistributing the block of read/write modules into a block read/write module core portion that operates in parallel while interacting, in a time-multiplexed manner, with a substantially smaller set of common portions. In particular, data processing in the read/write circuits between a plurality of sense amplifiers and data latches is performed by a shared processor.

Therefore, there is a general need for high-performance and high-capacity non-volatile memory. In particular, there is a need for a compact non-volatile memory with enhanced read and program performance, having an improved processor that is compact and efficient, yet highly versatile, for processing data among the read/write circuits.

According to one aspect of the invention, cache operations are presented that allow data to be transferred into or out of the memory while the internal memory is engaged in another operation, such as a read, program or erase. In particular, arrangements of data latches that allow such cache operations, and methods of their use, are described.

An architecture is described in which data latches are shared by a number of physical pages. For example, a read/write stack is associated with bit lines of the memory that are shared by multiple word lines. While one operation is going on in the memory, if any of these latches are free, they can cache data for future operations on the same or another word line, saving transfer time since this can be hidden behind another operation. This can improve performance by increasing the amount of pipelining of different operations or different phases of operations. In one example, in a cache program operation, while one page of data is being programmed another page of data can be loaded in, saving on transfer time. In another example, in an exemplary embodiment, a read operation on one word line is inserted into a write operation on another word line, allowing the data from the read to be transferred out of the memory while the data write continues.

According to various aspects, data from another page in the same block, but on a different word line, can be toggled out (for example, to perform an ECC operation) while a write or other operation is in progress for the first page of data. This inter-phase pipelining of operations allows the time needed for the data transfer to be hidden behind the operation on the first page of data. More generally, this allows a portion of one operation to be inserted between the phases of another, typically longer, operation. Another example would be to insert a sensing operation between the phases of, say, an erase operation, such as before an erase pulse or before a soft programming phase used as the later part of the erase.

If a relatively long operation with different phases is being performed, a principal aspect is to interpose a quicker operation by using the shared latches of the read/write stacks, if latches are available. For example, a read can be inserted into a program or erase operation, or a binary program can be inserted into an erase. The main exemplary embodiment toggles data in and/or out for one page during a program operation for another page that shares the same read/write stacks, where, for example, a read of the data to be toggled out and modified is inserted into the verify phase of the data write.

The availability of open data latches can arise in a number of ways. Generally, for a memory storing n bits per cell, n such data latches are needed for each bit line; however, not all of these latches are needed at all times. For example, in a two-bit-per-cell memory storing data in an upper page/lower page format, one data latch is needed while programming the lower page (with another latch used if quick pass write is implemented). Two data latches are needed while programming the upper page (with a third latch used if quick pass write is implemented). More generally, for memories storing multiple pages, all of the latches are needed only when programming the highest page. This leaves the other latches available for cache operations. Further, even while writing the highest page, latches will free up as the various states are removed from the verify phase of the write operation. Specifically, once only the highest state remains to be verified, only a single latch is needed for verification purposes, and the other latches can be used for cache operations.
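As a rough illustration of this bookkeeping, the sketch below counts how many latches per bit line remain free for caching in the two-bit-per-cell example. The function, enum names and latch counts simply follow the text above; they are illustrative, not firmware from the patent.

#include <stdio.h>

enum write_phase {
    LOWER_PAGE_WRITE,          /* binary operation: one data latch in use */
    UPPER_PAGE_WRITE,          /* both data latches in use during verify */
    HIGHEST_STATE_ONLY         /* only the highest state left to verify */
};

int free_latches_for_cache(int latches_per_bitline, enum write_phase phase,
                           int quick_pass_write)
{
    int in_use;

    if (phase == HIGHEST_STATE_ONLY)
        return latches_per_bitline - 1;        /* a single latch suffices here */

    in_use = (phase == LOWER_PAGE_WRITE) ? 1 : 2;
    if (quick_pass_write)
        in_use += 1;                           /* extra latch for quick pass write */

    return latches_per_bitline - in_use;       /* left over for caching */
}

int main(void)
{
    /* three latches per bit line: two data latches plus the quick pass write latch */
    printf("%d\n", free_latches_for_cache(3, LOWER_PAGE_WRITE, 1));   /* 1 free */
    printf("%d\n", free_latches_for_cache(3, HIGHEST_STATE_ONLY, 1)); /* 2 free */
    return 0;
}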

An exemplary embodiment is based on a four-state memory storing two bits per cell and having two data latches for each bit line, plus an additional latch for quick pass write. The operations of writing the lower page, erasing, or performing a post-erase soft program are basically binary operations, and have one of the data latches free, which can be used to cache data. Similarly, when performing an upper page or full sequence write, once all levels except the highest have verified, only a single state needs to be verified and the memory can free up a latch that can be used to cache data. One example of how this can be used is that, while programming one page, such as in a copy operation, a read of another page that shares the same set of data latches, such as another word line on the same set of bit lines, can be inserted between a programming pulse and the verify of the write. The address can then be switched back to the page being written, allowing the write process to pick up where it left off without having to restart. While the write continues, the data cached during the interposed read can be toggled out, checked or modified and transferred back, so that it is present to be written back once the earlier write operation completes. This kind of cache operation allows the toggling out and modification of the second page of data to be hidden behind the programming of the first page.
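The interleaving described here can be sketched as a control flow. The helper routines below are stand-ins for on-chip operations (pulsing, verifying, sensing, toggling data over the I/O bus) and are not the patent's actual sequencer; the word line numbers and pulse count are arbitrary.

#include <stdbool.h>
#include <stdio.h>

static int pulses_applied = 0;

static void apply_program_pulse(int wl)          { pulses_applied++; (void)wl; }
static bool verify_write(int wl)                 { (void)wl; return pulses_applied >= 5; }
static void sense_page_into_free_latches(int wl) { printf("read WL%d into free latches\n", wl); }
static void toggle_cached_data_out(void)         { printf("toggling cached page out over I/O bus\n"); }

static void cached_program_with_inserted_read(int write_wl, int read_wl)
{
    bool read_pending = true;
    bool done = false;

    while (!done) {
        apply_program_pulse(write_wl);

        if (read_pending) {
            /* Inserted read of the other word line, between pulse and verify. */
            sense_page_into_free_latches(read_wl);
            read_pending = false;
            /* The cached data streams out over the I/O bus while the write
             * below carries on inside the chip. */
            toggle_cached_data_out();
        }

        done = verify_write(write_wl);           /* write picks up where it left off */
    }
}

int main(void)
{
    cached_program_with_inserted_read(3, 7);     /* write word line 3, cache-read word line 7 */
    return 0;
}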

Cache Operations in the Data Latches During Program Operations

According to one aspect of the invention, while a program operation is taking place, programming data for other pending program operations is loaded into the data latches via the I/O bus. According to another aspect of the invention, as the phases of a write operation differ in the number of states that need to be tracked, phase-dependent coding enables efficient use of the available data latches, thereby allowing the maximum number of remaining latches to be used for background cache operations.

Additional features and advantages of the present invention will be understood from the following description of its preferred embodiments, which description should be taken in conjunction with the accompanying drawings.

Figure 7A illustrates schematically a compact memory device having a bank of partitioned read/write stacks, in which the improved processor of the present invention is implemented. The memory device includes a two-dimensional array of memory cells 300, control circuitry 310, and read/write circuits 370. The memory array 300 is addressable by word lines via a row decoder 330 and by bit lines via a column decoder 360. The read/write circuits 370 are implemented as a bank of partitioned read/write stacks 400 and allow a block (also referred to as a "page") of memory cells to be read or programmed in parallel. In a preferred embodiment, a page is constituted from a contiguous row of memory cells. In another embodiment, where a row of memory cells is partitioned into multiple blocks or pages, a block multiplexer 350 is provided to multiplex the read/write circuits 370 to the individual blocks.

The control circuitry 310 cooperates with the read/write circuits 370 to perform memory operations on the memory array 300. The control circuitry 310 includes a state machine 312, an on-chip address decoder 314 and a power control module 316. The state machine 312 provides chip-level control of memory operations. The on-chip address decoder 314 provides an address interface between the addresses used by the host or a memory controller and the hardware addresses used by the decoders 330 and 360. The power control module 316 controls the power and voltages supplied to the word lines and bit lines during memory operations.

Figure 7B illustrates a preferred arrangement of the compact memory device shown in Figure 7A. Access to the memory array 300 by the various peripheral circuits is implemented in a symmetric fashion, on opposite sides of the array, so that the access lines and circuitry on each side are reduced by half. Thus, the row decoder is split into row decoders 330A and 330B, and the column decoder into column decoders 360A and 360B. In the embodiment where a row of memory cells is partitioned into multiple blocks, the block multiplexer 350 is split into block multiplexers 350A and 350B. Similarly, the read/write circuits are split into read/write circuits 370A connecting to bit lines from the bottom of the array 300 and read/write circuits 370B connecting to bit lines from the top of the array 300. In this way, the density of the read/write modules, and therefore that of the partitioned read/write stacks 400, is essentially reduced by half.

Figure 8 illustrates schematically a general arrangement of the basic components in the read/write stack shown in Figure 7A. According to the general architecture of the invention, the read/write stack 400 comprises a stack of sense amplifiers 212 for sensing k bit lines, an I/O module 440 for input or output of data via an I/O bus 231, a stack of data latches 430 for storing input or output data, a common processor 500 to process and store data within the read/write stack 400, and a stack bus 421 for communication among the stack components. A stack bus controller in the read/write circuits 370 provides control and timing signals via lines 411 for controlling the various components of the read/write stacks.

Figure 9 illustrates a preferred arrangement of the read/write stacks in the read/write circuits shown in Figures 7A and 7B. Each read/write stack 400 operates on a group of k bit lines in parallel. If a page has p = r*k bit lines, there will be r read/write stacks, 400-1, ..., 400-r.

The entire bank of partitioned read/write stacks 400 operating in parallel allows a block (or page) of p cells along a row to be read or programmed in parallel. Thus, there will be p read/write modules for the entire row of cells. As each stack serves k memory cells, the total number of read/write stacks in the bank is given by r = p/k. For example, if r is the number of stacks in the bank, then p = r*k. One example memory array may have p = 512 bytes (512 x 8 bits), k = 8, and therefore r = 512. In the preferred embodiment, the block is a run of the entire row of cells. In another embodiment, the block is a subset of the cells in the row. For example, the subset of cells could be one half or one quarter of the entire row. The subset of cells could be a run of contiguous cells, or every other cell, or every predetermined number of cells.
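The stack-count arithmetic from this example can be checked directly; the snippet below only restates the numbers given in the text.

#include <assert.h>

int main(void)
{
    int p = 512 * 8;   /* 512 bytes -> 4096 bit lines in the example */
    int k = 8;         /* bit lines served by each read/write stack */
    int r = p / k;     /* number of read/write stacks in the bank */

    assert(r == 512);
    return 0;
}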

Each read/write stack, such as 400-1, essentially contains a stack of sense amplifiers 212-1 to 212-k serving a segment of k memory cells in parallel. A preferred sense amplifier is disclosed in United States Patent Publication No. 2004-0109357-A1, the entire disclosure of which is hereby incorporated herein by reference.

The stack bus controller 410 provides control and timing signals to the read/write circuits 370 via lines 411. The stack bus controller is itself dependent on the memory controller 310 via lines 311. Communication within each read/write stack 400 is effected by an interconnecting stack bus 431 and is controlled by the stack bus controller 410. Control lines 411 provide control and clock signals from the stack bus controller 410 to the components of the read/write stack 400-1.

In the preferred arrangement, the stack bus is partitioned into an SABus 422 for communication between the common processor 500 and the stack of sense amplifiers 212, and a DBus 423 for communication between the processor and the stack of data latches 430.

The stack of data latches 430 comprises data latches 430-1 to 430-k, one for each memory cell associated with the stack. The I/O module 440 enables the data latches to exchange data with the outside via the I/O bus 231.

The common processor also includes an output 507 for outputting a status signal indicating the status of a memory operation, such as an error condition. The status signal is used to drive the gate of an n-type transistor 550 that is tied to a flag bus 509 in a wired-OR configuration. The flag bus is preferably precharged by the controller 310 and is pulled down when a status signal is asserted by any of the read/write stacks.

Figure 10 illustrates an improved embodiment of the common processor shown in Figure 9. The common processor 500 comprises a processor bus, PBUS 505, for communication with external circuits, an input logic 510, a processor latch PLatch 520 and an output logic 530.

The input logic 510 receives data from the PBUS and outputs to a BSI node transformed data in one of the logical states "1", "0" or "Z" (float), depending on the control signals from the stack bus controller 410 via signal lines 411. A set/reset latch, PLatch 520, then latches BSI, resulting in a pair of complementary output signals MTCH and MTCH*.

The output logic 530 receives the MTCH and MTCH* signals and outputs on the PBUS 505 transformed data in one of the logical states "1", "0" or "Z" (float), depending on the control signals from the stack bus controller 410 via signal lines 411.

At any one time, the common processor 500 processes the data related to a given memory cell. For example, Figure 10 illustrates the case of the memory cell coupled to bit line 1. The corresponding sense amplifier 212-1 comprises a node where the sense amplifier data appears. In the preferred embodiment, the node takes the form of an SA latch 214-1 that stores data. Similarly, the corresponding set of data latches 430-1 stores input or output data associated with the memory cell coupled to bit line 1. In the preferred embodiment, the set of data latches 430-1 comprises a sufficient number of data latches, 434-1, ..., 434-n, for storing n bits of data.

當藉由一對補充信號SAP及SAN而致能轉移閘極501時,通用處理器500之PBUS 505經由SBUS 422可接近SA鎖存器214-1。類似地,當藉由一對補充信號DTP及DTN而致能轉移閘極502時,PBUS 505經由DBUS 423可接近資料鎖存器之集合430-1。明確地將信號SAP、SAN、DTP及DTN說明為來自堆疊匯流排控制器410之控制信號之部分。When the transfer gate 501 is enabled by a pair of supplemental signals SAP and SAN, the PBUS 505 of the general purpose processor 500 is accessible to the SA latch 214-1 via the SBUS 422. Similarly, when the transfer gate 502 is enabled by a pair of supplemental signals DTP and DTN, the PBUS 505 can access the set 430-1 of data latches via the DBUS 423. The signals SAP, SAN, DTP, and DTN are explicitly illustrated as part of the control signals from the stacked bus controller 410.

圖11A說明圖10所示之通用處理器之輸入邏輯的較佳實施例。輸入邏輯510在PBUS 505上接收資料且視控制信號而使得輸出BSI為相同、反相或浮動的。輸出BSI節點本質上受轉移閘極522或包含串聯至Vdd之p型電晶體524及525的上拉電路,或者包含串聯接地之n型電晶體526及527的下拉電路之輸出的影響。上拉電路具有至p型電晶體524及525之閘極,其分別由信號PBUS及ONE控制。下拉電路具有至n型電晶體526及527之閘極,其分別由信號ONEB<1>及PBUS控制。Figure 11A illustrates a preferred embodiment of the input logic of the general purpose processor shown in Figure 10. The input logic 510 receives the data on PBUS 505 and, depending on the control signals, causes the output BSI to be the same, inverted, or floating. The output BSI node is essentially driven either by the transfer gate 522, by a pull-up circuit comprising p-type transistors 524 and 525 connected in series to Vdd, or by a pull-down circuit comprising n-type transistors 526 and 527 connected in series to ground. The gates of the p-type transistors 524 and 525 in the pull-up circuit are controlled by the signals PBUS and ONE, respectively. The gates of the n-type transistors 526 and 527 in the pull-down circuit are controlled by the signals ONEB<1> and PBUS, respectively.

圖11B說明圖11A之輸入邏輯之真值表。由PBUS及係來自堆疊匯流排控制器410之控制信號之部分的控制信號ONE、ONEB<0>、ONEB<1>控制邏輯。本質上,支援三個轉移模式:通過、反相及浮動。Figure 11B illustrates a truth table for the input logic of Figure 11A. The control signals ONE, ONEB<0>, ONEB<1> control logic are derived from the PBUS and the portion of the control signal from the stack bus controller 410. Essentially, three transfer modes are supported: pass, invert, and float.

在BSI與輸入資料相同之通過模式之情形下,信號ONE處於邏輯"1",ONEB<0>處於"0"且ONEB<1>處於"0"。此將去能上拉及下拉電路,但將使得轉移閘極522能夠在PBUS 505上將資料傳遞至輸出523。在BSI為輸入資料之反相之反相模式的情形下,信號ONE處於"0",ONEB<0>處於"1"且ONEB<1>處於"1"。此將去能轉移閘極522。又,當PBUS處於"0"時,將去能下拉電路而致能上拉電路,此導致BSI處於"1"。類似地,當PBUS處於"1"時,將去能上拉電路而致能下拉電路,此導致BSI處於"0"。最後,在浮動模式之情形下,可藉由使得信號ONE處於"1",ONEB<0>處於"1"且ONEB<1>處於"0"而使輸出BSI浮動。為了完整性而列出浮動模式,但在實務上不使用該模式。In the pass mode, where BSI is the same as the input data, the signal ONE is at logic "1", ONEB<0> is at "0" and ONEB<1> is at "0". This disables both the pull-up and the pull-down circuits but enables the transfer gate 522 to pass the data on the PBUS 505 to the output 523. In the inverted mode, where BSI is the inverse of the input data, the signal ONE is at "0", ONEB<0> is at "1" and ONEB<1> is at "1". This disables the transfer gate 522. Also, when PBUS is at "0", the pull-down circuit is disabled while the pull-up circuit is enabled, which results in BSI being at "1". Similarly, when PBUS is at "1", the pull-up circuit is disabled while the pull-down circuit is enabled, which results in BSI being at "0". Finally, in the floating mode, the output BSI can be floated by having the signal ONE at "1", ONEB<0> at "1" and ONEB<1> at "0". The floating mode is listed for completeness although it is not used in practice.
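As a rough illustration (not part of the original specification), the truth table of Figure 11B can be modeled in C as below. The function name and the encoding of the tri-state output are assumptions made for this sketch; only the pass, invert and float modes listed above are represented.

```c
#include <assert.h>

/* Tri-state value on the BSI node: 0, 1, or floating ("Z"). */
typedef enum { LOGIC_0 = 0, LOGIC_1 = 1, LOGIC_Z = 2 } tristate_t;

/* Hypothetical model of the input logic of Figures 11A/11B.
 * pbus              : data bit presented on PBUS (0 or 1)
 * one, oneb0, oneb1 : control signals ONE, ONEB<0>, ONEB<1>
 * Returns the resulting BSI value. */
static tristate_t input_logic_bsi(int pbus, int one, int oneb0, int oneb1)
{
    if (one == 1 && oneb0 == 0 && oneb1 == 0)     /* pass mode   */
        return pbus ? LOGIC_1 : LOGIC_0;          /* BSI = PBUS  */
    if (one == 0 && oneb0 == 1 && oneb1 == 1)     /* invert mode */
        return pbus ? LOGIC_0 : LOGIC_1;          /* BSI = ~PBUS */
    if (one == 1 && oneb0 == 1 && oneb1 == 0)     /* float mode  */
        return LOGIC_Z;                           /* BSI left floating */
    assert(!"unsupported control combination");
    return LOGIC_Z;
}
```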

圖12A說明圖10所示之通用處理器之輸出邏輯的較佳實施例。在處理器鎖存器PLatch 520中鎖存BSI節點處來自輸入邏輯510之信號。輸出邏輯530自PLatch 520之輸出接收資料MTCH及MTCH*,且視控制信號而在PBUS上以通過、反相或浮動模式輸出。換言之,四個分支用作PBUS 505之驅動器,主動將其拉至高、低或浮動狀態。此藉由PBUS 505之四個分支電路(即兩個上拉電路及兩個下拉電路)而完成。第一上拉電路包含串聯至Vdd之p型電晶體531及532,且能夠在MTCH處於"0"時上拉PBUS。第二上拉電路包含串聯至Vdd之p型電晶體533及534,且能夠在MTCH處於"1"時上拉PBUS。類似地,第一下拉電路包含串聯接地之n型電晶體535及536,且能夠在MTCH處於"0"時下拉PBUS。第二下拉電路包含串聯接地之n型電晶體537及538,且能夠在MTCH處於"1"時下拉PBUS。Figure 12A illustrates a preferred embodiment of the output logic of the general purpose processor shown in Figure 10. The signal at the BSI node from the input logic 510 is latched in the processor latch PLatch 520. The output logic 530 receives the data MTCH and MTCH* from the output of PLatch 520 and, depending on the control signals, drives the PBUS in pass, invert or float mode. In other words, four branches act as drivers of the PBUS 505, actively pulling it to a high, low or floating state. This is accomplished by four branch circuits for the PBUS 505, namely two pull-up circuits and two pull-down circuits. A first pull-up circuit comprises p-type transistors 531 and 532 connected in series to Vdd, and is able to pull up the PBUS when MTCH is at "0". A second pull-up circuit comprises p-type transistors 533 and 534 connected in series to Vdd, and is able to pull up the PBUS when MTCH is at "1". Similarly, a first pull-down circuit comprises n-type transistors 535 and 536 connected in series to ground, and is able to pull down the PBUS when MTCH is at "0". A second pull-down circuit comprises n-type transistors 537 and 538 connected in series to ground, and is able to pull down the PBUS when MTCH is at "1".

本發明之一特徵為以PMOS電晶體構成上拉電路且以NMOS電晶體構成下拉電路。由於藉由NMOS之拉動遠強於PMOS之拉動,因此在任何競爭中下拉將總是勝過上拉。換言之,節點或匯流排可總是預設為上拉或"1"狀態,且必要時可總藉由下拉而倒轉為"0"狀態。One feature of the present invention is that a pull-up circuit is formed by a PMOS transistor and a pull-down circuit is formed by an NMOS transistor. Since the pulling of the NMOS is much stronger than the pulling of the PMOS, the pull-down will always outweigh the pull-up in any competition. In other words, the node or bus bar can always be preset to a pull-up or "1" state, and can always be reversed to a "0" state by pull-down if necessary.

圖12B說明圖12A之輸出邏輯之真值表。藉由自輸入邏輯鎖存之MTCH、MTCH 及係來自堆疊匯流排控制器410之控制信號之部分的控制信號PDIR、PINV、NDIR、NINV而控制邏輯。支援四個作業模式:通過、反相、浮動及預充電。Figure 12B illustrates a truth table for the output logic of Figure 12A. The logic is controlled by the MTCH, MTCH * latched from the input logic and the control signals PDIR, PINV, NDIR, NINV from portions of the control signals of the stacked bus controller 410. Four operating modes are supported: pass, invert, float, and precharge.

在浮動模式中,去能所有四個分支。此藉由使信號PINV=1、NINV=0、PDIR=1、NDIR=0(此亦為預設值)而完成。在通過模式中,當MTCH=0時,其將要求PBUS=0。此藉由僅致能具有n型電晶體535及536之下拉分支(其中所有控制信號處於其預設值,除了NDIR=1)而完成。當MTCH=1時,其將要求PBUS=1。此藉由僅致能具有p型電晶體533及534之上拉分支(其中所有控制信號處於其預設值,除了PINV=0)而完成。在反相模式中,當MTCH=0時,其將要求PBUS=1。此藉由僅致能具有p型電晶體531及532之上拉分支(其中所有控制信號處於其預設值,除了PDIR=0)而完成。當MTCH=1時,其將要求PBUS=0。此藉由僅致能具有n型電晶體537及538之下拉分支(其中所有控制信號處於其預設值,除了NINV=1)而完成。在預充電模式中,PDIR=0及PINV=0之控制信號設定將在MTCH=1時致能具有p型電晶體531及532之上拉分支或在MTCH=0時致能具有p型電晶體533及534之上拉分支。In the floating mode, all four branches are disabled. This is accomplished by having the signals PINV=1, NINV=0, PDIR=1, NDIR=0, which are also their default values. In the pass mode, when MTCH=0, it will require PBUS=0. This is accomplished by enabling only the pull-down branch with n-type transistors 535 and 536, with all control signals at their default values except for NDIR=1. When MTCH=1, it will require PBUS=1. This is accomplished by enabling only the pull-up branch with p-type transistors 533 and 534, with all control signals at their default values except for PINV=0. In the invert mode, when MTCH=0, it will require PBUS=1. This is accomplished by enabling only the pull-up branch with p-type transistors 531 and 532, with all control signals at their default values except for PDIR=0. When MTCH=1, it will require PBUS=0. This is accomplished by enabling only the pull-down branch with n-type transistors 537 and 538, with all control signals at their default values except for NINV=1. In the precharge mode, the control signal settings of PDIR=0 and PINV=0 will enable the pull-up branch with p-type transistors 531 and 532 when MTCH=1, or the pull-up branch with p-type transistors 533 and 534 when MTCH=0.
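For illustration only (not part of the original text), the truth table of Figure 12B can be sketched in C as follows. The function name and the tri-state encoding are assumed; the branch-enable conditions simply restate the pass, invert, float and precharge behavior described above, and the final resolution reflects the stronger NMOS pull-down noted earlier.

```c
typedef enum { PB_LOW = 0, PB_HIGH = 1, PB_FLOAT = 2 } pbus_t;

/* Hypothetical model of the output logic of Figures 12A/12B.
 * mtch                   : latched data bit MTCH (MTCH* is its complement)
 * pinv, ninv, pdir, ndir : control signals (defaults PINV=1, NINV=0, PDIR=1, NDIR=0)
 * Returns the level driven onto PBUS. */
static pbus_t output_logic_pbus(int mtch, int pinv, int ninv, int pdir, int ndir)
{
    int up1   = (pdir == 0) && (mtch == 0);  /* p-transistors 531/532 pull up   */
    int up2   = (pinv == 0) && (mtch == 1);  /* p-transistors 533/534 pull up   */
    int down1 = (ndir == 1) && (mtch == 0);  /* n-transistors 535/536 pull down */
    int down2 = (ninv == 1) && (mtch == 1);  /* n-transistors 537/538 pull down */

    /* The NMOS pull-down is stronger than the PMOS pull-up, so a pull-down
     * wins any contention, as stated in the text. */
    if (down1 || down2)
        return PB_LOW;
    if (up1 || up2)
        return PB_HIGH;
    return PB_FLOAT;                         /* all four branches disabled */
}
```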

通用處理器作業在2004年12月29日之美國專利申請案號11/026,536中揭露地更為充分,該申請案之全文以引用的方式併入本文中。A general-purpose processor operation is more fully disclosed in U.S. Patent Application Serial No. 11/026,536, the entire disclosure of which is incorporated herein by reference.

資料鎖存器在快取作業中之使用Data latch use in cache operations

本發明之許多態樣利用上文於圖10中描述之讀取/寫入堆疊之資料鎖存器用於快取作業,該等作業將在內部記憶體進行諸如讀取、寫入或抹除之其他作業的同時輸入及輸出資料。在上文所述之架構中,許多實體頁共用資料鎖存器。舉例而言,由於處於由字線之全部所共用之位元線的讀取/寫入堆疊上,因此當一作業進行時,若此等鎖存器中之任一者空閒,則其可快取資料用於同一或另一字線中之將來的作業,節省轉移時間(因為此可隱藏於另一作業後)。此可藉由增加對不同作業或作業之不同階段的管線式作業之量而改良效能。在一實例中,在快取程式作業中,當程式化一頁資料時,可載入另一頁資料以節省轉移時間。對於另一實例,在一例示性實施例中,將對一字線之讀取作業***對另一字線之寫入作業中,允許由讀取所得之資料在資料寫入繼續的同時轉移出記憶體。Many aspects of the present invention utilize the data latches of the read/write stacks described above in FIG. 10 for cache operations that input and output data while the internal memory is performing another operation such as a read, write or erase. In the architecture described above, many physical pages share the data latches. For example, since the latches sit in the read/write stacks of bit lines shared by all of the word lines, if any of these latches is idle while one operation is in progress, it can cache data for a future operation on the same or another word line, saving transfer time because that transfer can be hidden behind the other operation. This can improve performance by increasing the amount of pipelining between different operations or between different phases of an operation. In one example, in a cache program operation, while one page of data is being programmed, another page of data can be loaded to save transfer time. In another example, in an exemplary embodiment, a read operation on one word line is inserted into a write operation on another word line, allowing the data obtained by the read to be transferred out of the memory while the data write continues.

注意,此允許在寫入或其他作業對於第一頁資料進行的同時將來自同一區塊中但不同字線上之另一頁之資料切出(以(例如)進行ECC作業)。對作業之此階段間管線式作業允許資料轉移所需之時間藏於對第一頁資料之作業之後。更一般地,此允許將一作業之一部分***於另一作業(通常較長)之階段之間。另一實例會將感應作業***於(如)抹除作業之階段之間,諸如在抹除脈衝之前或在用作抹除之稍後部分的軟式程式化階段之前。Note that this allows data from another page in the same block but on a different word line to be cut out (for example, to perform an ECC job) while a write or other job is being performed on the first page of material. The time required for the pipelined job to allow data transfer during this phase of the job is hidden behind the work on the first page of data. More generally, this allows one part of a job to be inserted between the stages of another job (usually longer). Another example inserts an inductive job between, for example, stages of an erase job, such as before the erase pulse or before the soft stylization phase that is used later in the erase.

為了論述作業中之一些所需之相對時間,可將用於上文所述之系統之一例示性時間值集合取為:資料寫入:~700 μs(下部頁~600 μs,上部頁~800 μs);二進位資料寫入:~200 μs;抹除:~2,500 μs;讀取:~20-40 μs;讀取及切出資料:2 KB資料~80 μs,4 KB~160 μs,8 KB~320 μs。To account for the relative time required for some of the operations, an exemplary set of time values for the system described above can be taken as: data write: ~700 μs (lower page ~600 μs, upper page ~800 μs); binary data write: ~200 μs; erase: ~2,500 μs; read: ~20-40 μs; read and cut out data: 2 KB data ~80 μs, 4 KB ~160 μs, 8 KB ~320 μs.

此等值可用於參考以給出對下文之時序圖所涉及之相對時間的概念。若具有一具有不同階段之較長作業,則主要態樣將藉由使用讀取/寫入堆疊之共用鎖存器(若鎖存器可用)而***較快速之作業。舉例而言,可將讀取***於程式化或抹除作業中,或者可將二進位程式化***於抹除中。主要例示性實施例將在對於一頁之程式作業期間切入及/或切出資料用於另一頁,該頁共用相同之讀取寫入堆疊,其中(例如)將對待切出並修改之資料之讀取***於資料寫入之驗證階段中。These values can be used as a reference to give a concept of the relative time involved in the timing diagram below. If there is a longer job with a different stage, the main aspect will be inserted into the faster job by using the shared latch of the read/write stack (if the latch is available). For example, a read can be inserted into a stylized or erased job, or a binary can be stylized into the erase. The primary illustrative embodiment will cut and/or cut data for another page during a program for a page, the pages sharing the same read write stack, where, for example, the data to be cut and modified will be The read is inserted into the verification phase of the data write.
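As a rough back-of-the-envelope illustration (not part of the original text), the following C sketch uses the example time values listed above to compare a copy-style sequence with and without latch caching. The specific numbers, page count and the program itself are assumptions for illustration only.

```c
#include <stdio.h>

/* Approximate operation times from the text, in microseconds (assumed values). */
#define T_WRITE_UPPER   800.0   /* upper page data write            */
#define T_READ           25.0   /* in-array page read               */
#define T_TOGGLE_2KB     80.0   /* toggle 2 KB of data over I/O bus */

int main(void)
{
    int pages = 4;  /* number of 2 KB pages relocated in this example */

    /* Without caching: read, toggle out, then write, strictly in series. */
    double serial = pages * (T_READ + T_TOGGLE_2KB + T_WRITE_UPPER);

    /* With latch caching: the read of the next page is inserted into the
     * verify phase of the current write and the toggle-out happens while
     * the write continues, so after the first page only the writes remain
     * on the critical path. */
    double cached = (T_READ + T_TOGGLE_2KB) + pages * T_WRITE_UPPER;

    printf("serial: %.0f us, cached: %.0f us, saved: %.0f us\n",
           serial, cached, serial - cached);
    return 0;
}
```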

開放之資料鎖存器之可用性可以許多方式而發生。一般而言,對於每單元儲存n個位元之記憶體而言,對於每一位元線將需要n個該等資料鎖存器;然而,並非總是需要此等鎖存器之全部。舉例而言,在以上部頁/下部頁之格式儲存資料的每單元兩位元之記憶體中,在程式化下部頁時將需要兩個資料鎖存器。更一般地,對於儲存多個頁之記憶體而言,僅在程式化最高頁時將需要鎖存器之全部。此使得其他鎖存器可用於快取作業。此外,即使在寫入最高頁時,由於自寫入作業之驗證階段移除各種狀態,因此鎖存器將為自由的。特定言之,一旦僅剩最高狀態待驗證,則僅需單一鎖存器用於驗證之目的且其他鎖存器可用於快取作業。The availability of open data latches can occur in many ways. In general, for a memory that stores n bits per cell, n of these data latches will be required for each bit line; however, not all of these latches are always required. For example, in the memory of each unit of two bits stored in the format of the above page/lower page, two data latches will be required to program the lower page. More generally, for memory that stores multiple pages, all of the latches will be needed only when staging the top page. This allows other latches to be used for the cache job. In addition, even when the highest page is written, the latches will be free due to the removal of various states from the verify phase of the write job. In particular, once only the highest state remains to be verified, only a single latch is needed for verification purposes and other latches are available for the cache job.

以下論述將基於如併入於前文中之題為"Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories"之美國專利申請案中所描述的每單元儲存兩個位元且具有針對每一位元線上之資料之兩個鎖存器及用於快速通過寫入之一額外鎖存器的四態記憶體,該申請案與本申請案同時申請。寫入下部頁或抹除或進行後期抹除軟式程式化之作業基本上為二進位作業且其中資料鎖存器中之一者為空閒的,可使用其來快取資料。類似地,在進行上部頁或全序列寫入時,一旦除最高級別之所有級別已經驗證,則僅單一狀態需驗證且記憶體可使一鎖存器自由,可使用該鎖存器來快取資料。如何可使用此之一實例為在(諸如於複製作業中)程式化一頁時,對共用同一資料鎖存器集合之另一頁(諸如同一位元線集合上之另一字線)之讀取可在寫入之驗證階段期間***。接著可將位址切換至正寫入之頁,允許寫入處理在其停止之處拾起而無需重新開始。在寫入繼續之同時,在***之讀取期間快取之資料可經切出、檢查或修改且轉移返回以存在用於在一旦早先寫入作業完成時即寫回。此種類之快取作業允許將對第二頁資料之切出及修改藏於對第一頁之程式化之後。The following discussion will be based on the storage of two bits per cell as described in the U.S. Patent Application entitled "Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories", which is incorporated herein by reference. The two latches of the data on the bit line and the four-state memory for quickly passing one of the additional latches are applied at the same time as the present application. The job of writing to the lower page or erasing or performing a post-erase soft stylization is basically a binary job and one of the data latches is free, which can be used to cache data. Similarly, when performing the upper page or full sequence write, once all levels except the highest level have been verified, only a single state needs to be verified and the memory can free a latch, which can be used to cache data. How can one of the examples be used to read another page that shares the same set of data latches (such as another word line on the same set of bit lines) when staging a page (such as in a copy job) The fetch can be inserted during the verification phase of the write. The address can then be switched to the page being written, allowing the write process to be picked up where it left off without having to start over. While the write continues, the data cached during the read of the insert can be cut, checked, or modified and transferred back for existence to be written back once the write job is completed earlier. This type of cache operation allows the cutting out and modification of the second page of data to be hidden after the stylization of the first page.
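The flow just described can be summarized in pseudocode. The following C-style sketch is an illustration, not the patent's implementation, and every helper routine named here is hypothetical; it shows a read being inserted at a verify-phase boundary of an ongoing program operation, with the cached data toggled out while programming resumes.

```c
/* Assumed helper routines provided by the memory-controller firmware. */
extern void start_program(int page);
extern int  program_done(int page);
extern void program_step(int page);                 /* one pulse/verify cycle */
extern int  latch_is_free(void);
extern int  read_pending(int page);
extern void pause_program(void);
extern void resume_program(void);
extern void set_address(int page);
extern void sense_page_into_free_latch(void);
extern void toggle_out_and_modify_in_background(void);

/* Sketch: insert a read of read_page into the verify phase of an ongoing
 * write to write_page, then toggle the read data out while the write resumes. */
void cached_write_with_inserted_read(int write_page, int read_page)
{
    start_program(write_page);

    while (!program_done(write_page)) {
        program_step(write_page);

        if (latch_is_free() && read_pending(read_page)) {
            pause_program();                       /* stop at a phase boundary  */
            set_address(read_page);
            sense_page_into_free_latch();          /* inserted read             */
            set_address(write_page);
            resume_program();                      /* continue where it stopped */
            toggle_out_and_modify_in_background(); /* hidden behind the write   */
        }
    }
}
```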

作為第一實例,用於二位元記憶體之快取程式作業以單頁(下部頁/上部頁之格式)程式化模式而操作。圖13為圖10之簡化版本,其展示在一二位元實施例中與當前論述相關之一些特定元件,去除其他元件以簡化論述。此等包括連接資料I/O線231之資料鎖存器DL0 434-0、藉由線423而連接至通用處理器500之資料鎖存器DL1 434-1、藉由線435而與其他資料鎖存器共同地連接之資料鎖存器DL2 434-2以及藉由線422而連接至通用處理器500之感應放大器資料鎖存器DLS 214。圖13之各種元件根據其在對下部頁之程式化期間之部署而被標記。如題為"Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories"的與本申請案同時申請之美國專利申請案中所描述,鎖存器DL2 434-2用於快速通過寫入模式中之下部驗證(VL);對暫存器之包括以及在包括暫存器時對使用快速通過寫入之包括為可選的,但例示性實施例將包括此暫存器。As a first example, the cache program operation for a two-bit memory operates in the single-page (lower page/upper page format) programming mode. FIG. 13 is a simplified version of FIG. 10 that shows some of the specific elements relevant to the present discussion in a two-bit embodiment, the other elements being removed to simplify the discussion. These include data latch DL0 434-0, which is connected to the data I/O line 231; data latch DL1 434-1, which is connected to the general purpose processor 500 by line 423; data latch DL2 434-2, which is commonly connected with the other data latches by line 435; and sense amplifier data latch DLS 214, which is connected to the general purpose processor 500 by line 422. The various elements of FIG. 13 are labeled according to their disposition during the programming of the lower page. As described in the U.S. patent application entitled "Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories", filed concurrently with the present application, the latch DL2 434-2 is used for the lower verify (VL) in the quick pass write mode; the inclusion of this register, and the use of quick pass write when it is included, are optional, although the exemplary embodiment will include this register.

對下部頁之程式化可包括以下步驟:(1)處理由將資料鎖存器DL0 434-0重設為預設值"1"而開始。此慣例係用以簡化部分頁之程式化,因為將抑制對所選列中不待程式化之單元進行程式化。Styling the lower page may include the following steps: (1) Processing begins by resetting the data latch DL0 434-0 to a preset value of "1". This convention is used to simplify the stylization of partial pages because it will suppress the stylization of cells in the selected column that are not to be programmed.

(2)沿I/O線231將程式化資料供應至DL0 434-0。(2) The stylized data is supplied to the DL0 434-0 along the I/O line 231.

(3)程式化資料將被轉移至DL1 434-1及DL2 434-2(若包括此鎖存器且實施快速通過寫入)。(3) The stylized data will be transferred to DL1 434-1 and DL2 434-2 (if this latch is included and fast pass write is implemented).

(4)一旦將程式化資料轉移至DL1 434-1,即可將資料鎖存器DL0 434-0重設為"1"且在程式化時間期間,可沿I/O線231將下一資料頁載入DL0 434-0,此允許在寫入第一頁之同時對第二頁之快取。(4) Once the stylized data is transferred to DL1 434-1, the data latch DL0 434-0 can be reset to "1" and the next data can be placed along the I/O line 231 during the stylized time. The page loads DL0 434-0, which allows the second page to be cached while the first page is being written.

(5)一旦將第一頁載入DL1 434-1,程式化即可開始。使用DL1 434-1資料以將單元自進一步程式化封鎖。如題為"Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories"的與本申請案同時申請之美國專利申請案中所描述,DL2 434-2資料用於管理向快速通過寫入之第二階段之轉變的下部驗證封鎖。(5) Once the first page is loaded into DL1 434-1, the stylization can begin. Use DL1 434-1 data to further block the unit from further programming. The DL2 434-2 data is used to manage the second pass to fast pass, as described in the U.S. Patent Application entitled "Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories". The lower part of the phase transition is verified by the blockade.

(6)一旦程式化開始,在一程式化脈衝之後,下部驗證之結果即用以更新DL2 434-2;較高驗證之結果用以更新DL1 434-1。(此論述係基於"習知"編碼,其中下部頁程式化將達到A狀態。題為"Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories"的與本申請案同時申請之美國專利申請案及於2005年3月16日申請之題為"Non-Volatile Memory and Method with Power-Saving Read and Program-Verify Operations"之美國專利申請案進一步論述了此及其他編碼。當前論述向其他編碼之擴展易於隨後產生)。(6) Once the stylization begins, after a stylized pulse, the result of the lower verification is used to update DL2 434-2; the result of the higher verification is used to update DL1 434-1. (This discussion is based on the "known" code, in which the lower page stylization will reach the A state. A US patent application filed concurrently with the present application entitled "Use of Data Latches in Multi-Phase Programming of Non-Volatile Memories" This and other encodings are further discussed in the U.S. Patent Application Serial No. 5, the entire entire entire entire entire entire entire entire entire entire entire entire entire entire entire entire The extension is easy to generate later).

(7)在判定程式化是否完成之過程中,僅檢查列之單元之DL1 434-1暫存器(或程式化之適當實體單位)。(7) In the process of determining whether the stylization is completed, only the DL1 434-1 register of the listed unit (or the appropriate entity unit of stylization) is checked.

一旦寫入下部頁,則可對上部頁進行程式化。圖14展示與圖13相同之元件,但指示對於上部頁程式化之鎖存器分配,在其中讀入下部頁資料。(該描述再次使用習知編碼,使得上部頁之程式化將達到B及C狀態)。對上部頁之程式化可包括以下步驟:(1)一旦下部頁結束程式化,即以來自狀態機控制器之信號而開始上部頁(或下一頁)寫入,其中(未執行之)快取程式化指令得以保存。Once the lower page is written, the upper page can be programmed. Figure 14 shows the same components as Figure 13, but indicating the latch assignment for the upper page stylized, in which the lower page data is read. (This description uses conventional encoding again so that the stylization of the upper page will reach the B and C states). The stylization of the upper page may include the following steps: (1) Once the lower page ends programming, the upper page (or next page) is written with a signal from the state machine controller, where (unexecuted) is fast The stylized instructions are saved.

(2)程式化資料將被自DL0 434-0(在步驟(3)中下部頁寫入期間將資料載入DL0 434-0)轉移至DL1 434-1及DL2 434-2。(2) The stylized data will be transferred from DL0 434-0 (loading data to DL0 434-0 during the lower page write in step (3)) to DL1 434-1 and DL2 434-2.

(3)將自陣列讀入下部頁資料且將其置放於DL0 434-0中。(3) Read the next page data from the array and place it in DL0 434-0.

(4)DL1 434-1及DL2 434-2再次分別用於驗證高及驗證低封鎖資料。鎖存器DL0 434-0(保持下部頁資料)作為程式化參考資料而經檢查,但並不以驗證結果對其加以更新。(4) DL1 434-1 and DL2 434-2 are again used to verify high and verify low blocking data, respectively. The latch DL0 434-0 (keeping the lower page data) is checked as a stylized reference but is not updated with the verification result.

(5)作為驗證B狀態之部分,在於下部驗證VBL處感應之後,將於DL2 434-2中相應地更新資料,同時藉由高驗證VBH結果而更新DL1 434-1資料。類似地,C驗證將具有相應指令以藉由各別VCL及VCH結果來更新鎖存器DL2 434-2及DL1 434-1。(5) As part of verifying the B state, after sensing at the lower verification VBL, the data will be updated accordingly in DL2 434-2, while the DL1 434-1 data is updated by high verifying the VBH result. Similarly, C verification will have corresponding instructions to update latches DL2 434-2 and DL1 434-1 by respective VCL and VCH results.

(6)一旦B資料完成,則不需要下部頁資料(經保持於DL0 434-0中用於參考),因為僅需執行對C狀態之驗證。將DL0 434-0重設為"1"且可自I/O線231載入另一頁之程式化資料且於鎖存器DL0 434-0中對其進行快取。通用處理器500可設定僅C狀態待驗證之指示。(6) Once the B data is completed, the lower page data (remained in DL0 434-0 for reference) is not required because only the verification of the C state is performed. The DL0 434-0 is reset to "1" and the stylized data of another page can be loaded from the I/O line 231 and cached in the latch DL0 434-0. The general purpose processor 500 can set an indication that only the C state is to be verified.

(7)在判定上部頁程式化是否完成之過程中,對於B狀態檢查鎖存器DL1 434-1及DL0 434-0兩者。一旦將單元程式化為B狀態且僅驗證C狀態,則僅需檢查鎖存器DL1 434-1資料以觀察是否存在未經程式化之任何位元。(7) In the process of determining whether or not the upper page is programmed, both the latches DL1 434-1 and DL0 434-0 are checked for the B state. Once the unit is programmed into the B state and only the C state is verified, then only the latch DL1 434-1 data needs to be checked to see if there are any uncommated bits.

注意,在此配置下,在步驟6中,不再需要鎖存器DL0 434-0且其可用以快取資料以進行下一程式化作業。另外,在使用快速通過寫入之實施例中,一旦進入第二緩慢程式化階段,即亦可使得鎖存器DL2 434-2可用於快取資料,但在實務上,此鎖存器通常僅在相當短之時間週期內以此方式可用,常不足以使實施此特徵所需之額外附加項合理。Note that, under this arrangement, at step 6 the latch DL0 434-0 is no longer needed and can be used to cache data for the next programming operation. Additionally, in embodiments using quick pass write, once the second, slow programming phase is entered, the latch DL2 434-2 could also be made available for caching data; in practice, however, it is usually available in this way for only a fairly short period, too short to justify the additional overhead often needed to implement this feature.
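The latch usage described in the lower-page and upper-page sequences above can be condensed into a small helper. The following C sketch is illustrative only (the enum names and the function are assumptions, not part of the specification); it reports how many of the three latches DL0-DL2 are free for caching in each phase.

```c
/* Programming phases discussed in the text (names assumed for the sketch). */
typedef enum {
    PHASE_LOWER_PAGE,         /* binary write of the lower page                  */
    PHASE_UPPER_PAGE_B_AND_C, /* upper page write, B and C states still verified */
    PHASE_UPPER_PAGE_C_ONLY   /* only the C (highest) state remains to verify    */
} program_phase_t;

/* Returns the number of data latches (out of DL0, DL1, DL2) that are free
 * to cache data for another page in the given phase, per the text above. */
static int free_latches(program_phase_t phase)
{
    switch (phase) {
    case PHASE_LOWER_PAGE:
        /* DL1 holds the program data and DL2 the quick-pass-write lower
         * verify; DL0 can be reset and reused to load the next page.     */
        return 1;
    case PHASE_UPPER_PAGE_B_AND_C:
        /* DL0 still holds the lower page, read back as program reference. */
        return 0;
    case PHASE_UPPER_PAGE_C_ONLY:
        /* The lower page reference is no longer needed: DL0 is free again. */
        return 1;
    }
    return 0;
}
```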

圖15可用以說明上幾幅圖中已描述的以單頁模式進行之快取程式化之許多態樣。圖15展示記憶體內部事件發生(下部"真實忙碌"線)與自記憶體外部觀察(上部"快取忙碌"線)之相對時序。Figure 15 can be used to illustrate many aspects of the cached stylization in the single page mode that have been described in the previous figures. Figure 15 shows the relative timing of internal memory events (lower "real busy" lines) and external memory observations (upper "cache busy" lines).

在時間t 0 處,將待程式化至所選字線(WLn)上之下部頁載入記憶體中。此假定先前未曾快取第一下部頁之資料,因為其將用於後續頁。在時間t 1 處,完成下部頁載入且記憶體開始寫入下部頁。由於此在此點上等效於二進位作業,因此僅需驗證狀態A("pvfyA")且資料鎖存器DL0 434-0可用於接收下一頁資料,此處將下一頁資料取作待於時間t 2 經程式化至WLn中之上部頁,其因此在對下部頁之程式化期間於鎖存器DL0 434-0中經快取。上部頁在時間t 3 處完成載入且可在下部頁於t 4 處一結束時即得以程式化。在此配置下,雖然資料之全部(下部及上部頁)待寫入程式化之實體單位(此處為字線WLn)中,但是記憶體必須自時間t 3 等待至時間t 4 方可寫入上部頁資料,此不同於下文描述之全序列實施例。At time t 0 , the page to be programmed to the lower portion of the selected word line (WLn) is loaded into the memory. This assumes that the data of the first lower page has not been cached previously because it will be used for subsequent pages. At time t 1, the lower page is completed loading and start writing the lower page of memory. Since this is equivalent to a binary job at this point, only the state A ("pvfyA") needs to be verified and the data latch DL0 434-0 can be used to receive the next page of data, where the next page is taken as Waiting for time t 2 is programmed to the upper page of WLn, which is therefore cached in latch DL0 434-0 during the stylization of the lower page. T 3 the upper page finishes loading at time and may be programmable i.e. when a lower page ends at t 4. In this configuration, although all of the data (lower and upper pages) are to be written into the stylized physical unit (herein word line WLn), the memory must wait from time t 3 to time t 4 to be writable. The upper page data is different from the full sequence embodiment described below.

對上部頁之程式化開始於時間t 4 ,其中最初僅驗證B狀態("pvfyB"),在t 5 處添加C狀態("pvfyB/C")。一旦於t 6 處不再驗證B狀態,則僅C狀態需經驗證("pvfyC")且鎖存器DL0 434-0自由。此允許在上部頁完成程式化之同時快取下一資料集合。The stylized upper page begins at time t 4, where initially only the B state verification ( "pvfyB"), the state C is added ( "pvfyB / C") at t 5. Once in the B state is no longer verified t 6, only the C state needs validated ( "pvfyC") and the latch DL0 434-0 freedom. This allows the next data set to be cached while the upper page is being stylized.

如所註,根據如圖15所示的關於快取程式化之單頁演算法,即使上部頁資料可在時間t 3 處可用,記憶體仍將在開始寫入此資料之前等待直至時間t 4 。在向全序列程式作業之轉換(諸如由美國專利申請案11/013,125更為充分揭露之轉換)中,一旦上部頁可用,上部及下部頁資料即可同時經程式化。As note, in accordance with the programmable cache on a single page algorithm shown in Figure 15, even if the upper page data may be available at t 3, the memory will wait until time before starting to write this data. 4 at time t . In the conversion to a full sequence of programs, such as the transitions more fully disclosed in U.S. Patent Application Serial No. 11/013,125, once the upper page is available, the upper and lower page data can be simultaneously programmed.

用於全序列(低至全之轉換)寫入中之快取程式化之演算法如同上文而以下部頁程式化開始。因此,步驟(1)至(4)如同對於以單頁程式化模式進行之下部頁處理之步驟(1)至(4):(1)處理由將資料鎖存器DL0 434-0重設為預設值"1"而開始。此慣例係用以簡化部分頁之程式化,因為將抑制對所選列中不待程式化之單元進行程式化。The algorithm for the cached stylization for full-sequence (low-to-full conversion) writes is as above and the following page stylization begins. Therefore, steps (1) to (4) are the same as steps (1) to (4) for the lower page processing in the single page stylized mode: (1) processing is reset by resetting the data latch DL0 434-0 Start with a preset value of "1". This convention is used to simplify the stylization of partial pages because it will suppress the stylization of cells in the selected column that are not to be programmed.

(2)沿I/O線231將程式化資料供應至DL0 434-0。(2) The stylized data is supplied to the DL0 434-0 along the I/O line 231.

(3)程式化資料將被轉移至DL1 434-1及DL2 434-2(若包括此鎖存器且實施快速通過寫入)。(3) The stylized data will be transferred to DL1 434-1 and DL2 434-2 (if this latch is included and fast pass write is implemented).

(4)一旦將程式化資料轉移至DL1 434-1,即可將資料鎖存器DL0 434-0重設為"1"且在程式化時間期間,可沿I/O線231將下一資料頁載入DL0 434-0,此允許在寫入第一頁之同時對第二頁之快取。(4) Once the stylized data is transferred to DL1 434-1, the data latch DL0 434-0 can be reset to "1" and the next data can be placed along the I/O line 231 during the stylized time. The page loads DL0 434-0, which allows the second page to be cached while the first page is being written.

一旦載入第二頁資料,則若對應於正寫入之下部頁之上部且下部頁尚未結束程式化,則可實施向全序列寫入之轉換。此論述集中於資料鎖存器在該演算法中之使用,其中許多其他細節較充分地揭露於同在申請中、共同讓渡之美國專利申請案11/013,125中。Once the second page of material is loaded, the conversion to the full sequence of writes can be performed if it corresponds to the upper portion of the lower page being written and the lower page has not yet been programmed. This discussion focuses on the use of data latches in the algorithm, many of which are more fully disclosed in the co-pending U.S. Patent Application Serial No. 11/013,125.

(5)在將上部頁資料載入鎖存器DL0 434-0中之後,將在位址區塊中進行一判斷以檢查2頁是否在同一字線及同一區塊上,其中一頁為下部頁且一頁為上部頁。若為如此,則程式化狀態機將觸發下部頁程式化向全序列程式化之轉換(若此為允許的)。在所有未決驗證完成後,接著實現轉變。(5) After the upper page data is loaded into the latch DL0 434-0, a judgment is made in the address block to check whether the two pages are on the same word line and the same block, one of which is the lower part. The page and the page are the upper page. If so, the stylized state machine will trigger the conversion of the lower page stylized to the full sequence stylization (if this is allowed). After all pending verifications are completed, the transition is then implemented.

(6)在程式化序列自下部頁改變為全序列時通常將改變一些作業參數。在例示性實施例中,此等參數包括:(i)若下部頁資料尚未經封鎖,則對於脈衝驗證循環之數目的最大程式化迴路將由下部頁演算法之最大程式化迴路改變為全序列之最大程式化迴路,但已完成之程式化迴路之數目將不由轉換重設。(6) Some job parameters are usually changed when the stylized sequence is changed from the lower page to the full sequence. In an exemplary embodiment, the parameters include: (i) if the lower page data has not been blocked, then the maximum programmed loop for the number of pulse verification cycles will be changed from the largest stylized loop of the lower page algorithm to the full sequence. The maximum stylized loop, but the number of completed stylized loops will not be reset by the conversion.

(ii)如圖16所示,程式化波形以用於下部頁程式化處理中之值VPGM_L開始。若程式化波形已前進至其超過用於上部頁處理中之開始值VPGM_U之處,則在向全序列轉換時,在使階梯繼續上升之前,階梯將降回至VPGM_U。(ii) As shown in Figure 16, the stylized waveform begins with the value VPGM_L in the lower page stylization process. If the stylized waveform has advanced to a point where it exceeds the start value VPGM_U used in the upper page processing, then the ladder will fall back to VPGM_U before the ladder continues to rise during the full sequence transition.

(iii)判定程式化脈衝之步階及最大值之參數不改變。(iii) The parameters determining the step and maximum value of the stylized pulse are not changed.

(7)應執行對記憶體單元之當前狀態之全序列讀取以保證將程式化正確資料用於多級編碼。此確保可能之前在下部頁程式化中已經封鎖但需要進一步程式化以考慮上部頁資料之狀態在全序列開始時不被抑制程式化。(7) A full-sequence read of the current state of the memory cells should be performed to ensure that the correct data is programmed for the multi-level coding. This ensures that cells which may have already been locked out in the lower-page programming, but which require further programming to take the upper page data into account, are not program-inhibited when the full sequence begins.

(8)若啟動快速通過寫入,則將同樣更新鎖存器DL2 434-2之資料以反映上部頁程式化資料,因為鎖存器DL2 434-2之資料之前基於僅關於A狀態之下部驗證。(8) If fast pass write is initiated, the data of latch DL2 434-2 will be updated to reflect the upper page of the stylized data, because the data of latch DL2 434-2 was previously based on only the lower part of the A state. .

(9)程式化接著以多級、全序列程式化演算法而恢復。如圖16所示,若下部頁處理中之程式化波形已增加超過上部頁開始位準,則在轉換時波形後退至此位準。(9) Stylization is then resumed with a multi-level, full-sequence stylized algorithm. As shown in Figure 16, if the stylized waveform in the lower page processing has increased beyond the upper page start level, the waveform retreats to this level during the conversion.

圖17為下部頁向全序列轉換寫入處理中所涉及之相對時間的示意性表示。直至時間t 3 ,處理與上文關於圖15中之處理所描述的相同。在已載入上部頁之資料且進行向全序列演算法之轉變的t 3 處,切換驗證處理以包括B狀態連同A狀態。一旦封鎖A狀態之全部,驗證處理即在時間t 4 處切換為檢查B及C狀態。一旦於t 5 處已驗證B狀態,則僅C狀態需檢查且可使一暫存器自由以載入待程式化之下一資料,諸如如在快取忙碌線上所指示的下一字線(WLn+1 )上之下部頁。在時間t 6 處,已快取此下一資料集合且一旦對先前集合之C資料之程式化於t 7 結束,此下一資料集合即開始程式化。另外,在程式化(此處)字線WLn+1 上之下部頁之同時,可將下一資料(諸如相應之上部頁資料)載入開放之鎖存器DL0 434-0中。Figure 17 is a schematic representation of the relative time involved in the lower page to full sequence conversion write process. Until time t 3 , the process is the same as described above with respect to the process in FIG. In the upper page of data has been loaded and performed at t 3 to the full sequence conversion algorithm, the verification process is switched to include the B state in conjunction with the A state. Once the block A of the entire state, i.e., the verification process is switched at t 4 B and C check the state at a time. Once verified at t 5 the B state, only the C state can be checked and a register free to load the next data to be programmable, such as the next word in the cache line indicated by a long line ( WL n+1 ) Upper and lower pages. At time t 6, has been cached data collection and once this next previous collection of stylized profile of C ends at t 7, the next data collection starts this stylized. In addition, while stylizing (here) the lower page of the word line WLn +1 , the next data (such as the corresponding upper page material) can be loaded into the open latch DL0 434-0.

在全序列寫入期間,以獨立給出下部頁與上部頁狀態之方式實施一狀態報告。在程式化序列之結尾,若存在未完成之位元,則可執行對實體頁之掃描。第一掃描可檢查鎖存器DL0 434-0以尋找未完成之上部頁資料,第二掃描可檢查DL1 434-1以尋找未完成之下部頁資料。由於對B狀態之驗證將改變DL0 434-0及DL1 434-1資料,因此應以若位元之臨限值高於A驗證位準,則DL1 434-1資料"0"將改變為"1"之方式而執行A狀態驗證。此後期驗證將檢查是否存在任何程式化不足之B位準在A位準通過;若其在A位準通過,則誤差僅存在於上部頁而不存在於下部頁上;若其在A位準未通過,則下部頁及上部頁均具有誤差。During the full-sequence write, a status report is implemented in a way that gives the lower page and upper page statuses independently. At the end of the programming sequence, if there are unfinished bits, a scan of the physical page can be performed. A first scan can check latch DL0 434-0 for unfinished upper page data, and a second scan can check DL1 434-1 for unfinished lower page data. Since verification at the B state will change both the DL0 434-0 and DL1 434-1 data, an A-state verification should be performed in such a way that the DL1 434-1 data "0" is changed to "1" if the threshold value of the bit is higher than the A verify level. This post-verification checks whether any under-programmed B levels pass at the A level; if they pass at the A level, the error exists only in the upper page and not in the lower page; if they do not pass at the A level, both the lower page and the upper page have errors.
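Purely as an illustration of the scan just described (not the patent's implementation; the array names and the page width are assumptions), the two latch scans and the A-level post-verify adjustment might look like this in C:

```c
#include <stdbool.h>

#define PAGE_BITS 2048  /* assumed physical page width in bits */

/* dl0[i], dl1[i] hold the per-bit-line latch contents after programming:
 * 1 means the bit has locked out (finished), 0 means it is still unfinished. */
bool upper_page_ok(const int dl0[PAGE_BITS])
{
    for (int i = 0; i < PAGE_BITS; i++)
        if (dl0[i] == 0)          /* unfinished upper page bit */
            return false;
    return true;
}

bool lower_page_ok(const int dl1[PAGE_BITS])
{
    for (int i = 0; i < PAGE_BITS; i++)
        if (dl1[i] == 0)          /* unfinished lower page bit */
            return false;
    return true;
}

/* A-level post-verify: after an extra sense at the A level, a DL1 bit whose
 * cell reads above the A verify level is flipped from 0 to 1; any bit that
 * still fails here is in error on both the lower and the upper page. */
void post_verify_a(int dl1[PAGE_BITS], const bool above_a_level[PAGE_BITS])
{
    for (int i = 0; i < PAGE_BITS; i++)
        if (dl1[i] == 0 && above_a_level[i])
            dl1[i] = 1;
}
```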

若使用快取程式化演算法,則在程式化A及B資料之後,C狀態將經轉移至鎖存器DL1 434-1以完成程式化。在此情形下,對鎖存器之掃描對於下部頁不必要,因為下部頁將已通過程式化而無任何不合格位元。If a cached programming algorithm is used, the C state will be transferred to latch DL1 434-1 to complete the stylization after the programming of the A and B data. In this case, the scan of the latch is not necessary for the lower page because the lower page will have been stylized without any unqualified bits.

本發明之另一例示性實施例集合係關於頁複製作業,其中將資料集合自一位置再定位至另一位置。全部以引用的方式併入本文中的2004年5月13日申請之美國專利申請案第US 10/846,289號;2004年12月21日申請之第11/022,462號;及2004年8月9日申請之第US 10/915,039號;以及美國專利第6,266,273號中描述了資料再定位作業之各種態樣。當將資料自一位置複製至另一者時,常將資料切出以對其進行檢查(以(例如)尋找誤差)、更新(諸如更新標頭)或兩者(諸如校正所偵測到之誤差)。該等轉移亦係為了在無用單元收集作業中整併資料。本發明之主要態樣允許在寫入作業之驗證階段期間***對開放暫存器之資料讀取,其中接著隨著寫入作業繼續而將此經快取之資料轉移出記憶體裝置,此允許用於切出資料之時間藏於寫入作業之後。Another set of exemplary embodiments of the present invention relates to page copy operations, in which a set of data is relocated from one location to another. Various aspects of data relocation operations are described in U.S. Patent Application No. 10/846,289, filed May 13, 2004; No. 11/022,462, filed December 21, 2004; and No. 10/915,039, filed August 9, 2004; and in U.S. Patent No. 6,266,273, all of which are incorporated herein by reference in their entirety. When data is copied from one location to another, the data is often cut out to be checked (for example, to look for errors), updated (such as updating a header), or both (such as correcting detected errors). Such transfers are also done to consolidate data in garbage collection operations. A principal aspect of the present invention allows a data read to an open register to be inserted during the verify phase of a write operation, with this cached data then being transferred out of the memory device as the write operation continues, allowing the time for cutting out the data to be hidden behind the write operation.
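A high-level sketch of such a cached page copy flow is given below in C; it is illustrative only, and all function names are assumptions rather than the patent's implementation. As in the steps that follow, pages M, M+1, ... are source pages and pages N, N+1, ... are destination pages.

```c
/* Assumed firmware helpers. */
extern void read_page_into_latch(int src_page);      /* array -> DL1             */
extern void copy_latch_dl1_to_dl0(void);
extern void toggle_out_check_and_modify_dl0(void);   /* via I/O bus, e.g. for ECC */
extern void program_page_from_latches(int dst_page); /* starts the pulse/verify   */
extern int  program_busy(void);
extern void insert_read_when_latch_free(int src_page);

/* Copy 'count' pages starting at source page m to destination page n,
 * overlapping the read/toggle-out of page m+i+1 with the write of page n+i. */
void cached_page_copy(int m, int n, int count)
{
    read_page_into_latch(m);               /* page M into DL1                 */
    copy_latch_dl1_to_dl0();
    toggle_out_check_and_modify_dl0();      /* check/update header, fix errors */

    for (int i = 0; i < count; i++) {
        program_page_from_latches(n + i);   /* write of page N+i begins        */

        if (i + 1 < count) {
            /* While page N+i is programming, read the next source page and
             * toggle it out in the background, hiding the transfer time.     */
            insert_read_when_latch_free(m + i + 1);
            toggle_out_check_and_modify_dl0();
        }
        while (program_busy())
            ;                               /* wait for the write to finish    */
    }
}
```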

下文存在快取頁複製作業之兩個例示性實施例。在兩種情形下,均描述使用快速通過寫入實施之實施。圖18指示隨著處理進行的鎖存器之例示性配置之部署。Below are two illustrative embodiments of a cache page copy job. In both cases, the implementation using fast pass write implementation is described. Figure 18 illustrates the deployment of an exemplary configuration of latches as the process proceeds.

快取頁複製之第一版本將寫入至下部頁且可包括以下步驟,其中將讀取位址標為M、M+1......且將寫入位址標為N、N+1......:(1)將待複製之頁("頁M")讀取至鎖存器DL1 434-1中。此可為上部頁或下部頁之資料。The first version of the cache page copy will be written to the lower page and may include the following steps, where the read address is labeled M, M+1... and the write address is labeled N, N+1.. ....: (1) The page to be copied ("Page M") is read into the latch DL1 434-1. This can be the information on the upper or lower page.

(2)接著將頁M轉移至DL0 434-0中。(2) The page M is then transferred to DL0 434-0.

(3)接著切出DL0 434-0中之資料且對其進行修改,在此之後將其轉移回鎖存器中。(3) The data in DL0 434-0 is then cut out and modified, after which it is transferred back to the latch.

(4)程式化序列接著可開始。在將待寫入下部頁N中之資料轉移至DL1 434-1及DL2 434-2之後,鎖存器DL0 434-0準備好快取資料。將對此下部頁進行程式化。對於此實施例,程式化狀態機將停於此處。(4) The stylized sequence can then begin. After transferring the data to be written in the lower page N to DL1 434-1 and DL2 434-2, the latch DL0 434-0 is ready to cache data. This lower page will be stylized. For this embodiment, the stylized state machine will stop here.

(5)接著將待複製之下一頁讀取至DL0 434-0中。接著程式化可恢復。於步驟(4)之結尾停止之狀態機將自開始重新開始程式化序列。(5) Next, the next page to be copied is read into DL0 434-0. Then the stylization can be restored. The state machine that stopped at the end of step (4) will restart the stylized sequence from the beginning.

(6)程式化繼續直至下部頁結束。(6) Stylization continues until the end of the lower page.

複製目的頁位址將判定寫入係至下部頁或上部頁。若程式化位址為上部頁位址,則程式化序列將不停止直至程式化結束且將在寫入完成後執行步驟(5)之讀取。Copying the destination page address will determine the write to the lower or upper page. If the stylized address is the upper page address, the stylized sequence will not stop until the end of the stylization and the read of step (5) will be performed after the write is completed.

在第二快取頁複製方法中,可暫停程式化/驗證處理以***一讀取作業且接著重新開始寫入作業(在其停止之處拾起)。接著可切出在此交錯之感應作業期間讀取之資料,同時經恢復之寫入作業繼續。又,此第二處理允許一旦僅C狀態正被驗證且每一位元線上之一鎖存器開放,即在上部頁或全序列寫入處理中使用頁複製機制。第二快取頁複製作業以與第一情形中相同之前三個步驟開始,但接著不同。其可包括以下步驟:(1)將待複製之頁("頁M")讀取至鎖存器DL1 434-1中。此可為下部或上部頁。In the second cache page copy method, the stylization/verification process may be suspended to insert a read job and then restart the write job (pick up where it left off). The data read during this interleaved sensing operation can then be cut out while the resumed write operation continues. Again, this second process allows the page copy mechanism to be used in the upper page or full sequence write process once only the C state is being verified and one of the latches on each bit line is open. The second cache page copy job begins with the same three steps as in the first case, but then differs. It may include the following steps: (1) Reading the page to be copied ("page M") into the latch DL1 434-1. This can be the lower or upper page.

(2)接著將來自頁M之資料轉移至DL0 434-0中。(如同之前一樣,N等等將表示寫入位址,M等等用於讀取位址)。(2) The data from page M is then transferred to DL0 434-0. (As before, N, etc. will mean writing to the address, M, etc. for reading the address).

(3)接著切出DL0 434-0中之資料且對其進行修改,且接著將其轉移回鎖存器。(3) The data in DL0 434-0 is then cut out and modified, and then transferred back to the latch.

(4)狀態機程式化將進入無限等待狀態直至輸入指令(讀取指令)且接著至鎖存器DL0 434-0的對另一頁(如下一頁M+1)之讀取將開始。(4) The state machine will be programmed to enter the infinite wait state until the input command (read command) and then to the other bit of the latch DL0 434-0 (the next page M+1) will begin.

(5)一旦步驟(4)之讀取完成,即將位址切換回字線及區塊位址以將步驟(1至3)中之資料程式化至頁N(此處為下部頁)且程式化得以恢復。(5) Once the reading of step (4) is completed, the address is switched back to the word line and the block address to program the data in steps (1 to 3) to page N (here, the lower page) and the program The recovery was restored.

(6)在對頁M+1之讀取結束之後,可切出資料,對其進行修改且將其返回。若兩頁為同一WL上之相應的上部頁及下部頁,則一旦處理完成,即可將寫入轉換為全序列作業。(6) After the reading of the page M+1 is completed, the data can be cut out, modified, and returned. If two pages are the corresponding upper and lower pages on the same WL, once the processing is complete, the write can be converted to a full sequence of jobs.

(7)如在早先所述之正常快取程式化中一樣,一旦在全序列寫入中完成A及B位準,即將DL0 434-0中之資料轉移至DL1 434-1,且可發布對於另一頁(例如,頁M+2)之讀取指令。若不存在單頁至全序列之轉換,則下部頁將完成寫入且接著上部頁將開始。在完全完成B位準狀態之後,相同的DL0 434-0至DL1 434-1資料轉移將發生,且狀態機將進入等待對於頁M+2之讀取指令的狀態。(7) As in the normal cache stylization described earlier, once the A and B levels are completed in the full sequence write, the data in DL0 434-0 is transferred to DL1 434-1 and can be issued for A read command for another page (eg, page M+2). If there is no single page to full sequence conversion, the lower page will complete writing and then the upper page will begin. After the B-level state is fully completed, the same DL0 434-0 to DL1 434-1 data transfer will occur and the state machine will enter a state waiting for a read command for page M+2.

(8)一旦讀取指令到達,即將位址切換至讀取位址且讀出下一頁(頁M+2)。(8) Once the read command arrives, the address is switched to the read address and the next page is read (page M+2).

(9)一旦讀取完成,即將位址切換回先前之上部頁位址(程式化位址N+1)直至寫入完成。(9) Once the reading is completed, the address is switched back to the previous upper page address (programmed address N+1) until the writing is completed.

如上文所註,例示性實施例除了包括用於保持可經程式化至記憶體單元中之每一者中的(此處,2位元)資料之鎖存器DL0 434-0及DL1 434-1之外還包括用於快速通過寫入技術之下部驗證的鎖存器DL2 434-2。一旦通過下部驗證,即亦可使鎖存器DL2 434-2自由且用以快取資料,但此在例示性實施例中未進行。As noted above, the illustrative embodiments include, in addition to latches DL0 434-0 and DL1 434, for maintaining (here, 2-bit) data that can be programmed into each of the memory cells. In addition to 1, a latch DL2 434-2 for fast verification by the lower part of the write technique is included. Once verified by the lower portion, latch DL2 434-2 can also be freed and used to cache data, but this is not done in the exemplary embodiment.

圖19A及圖19B說明第二快取頁複製方法之相對時序,其中圖19B說明具有全序列寫入轉換之演算法且圖19A說明不具有全序列寫入轉換之演算法。(圖19A及圖19B均由兩個部分構成:開始於對應於t 0 之斷續豎直線A處且以對應於t 5 之斷續豎直線B結束的第一上部部分;係上部部分之延續且以對應於t 5 之斷續豎直線B開始的第二下部部分。在兩種情形中,時間t 5 處之線B在上部部分中與在下部部分中相同,兩部分中僅存在一接縫以允許將其顯示於兩條線上)。19A and 19B illustrate the relative timing of the second cache page copying method, wherein FIG. 19B illustrates an algorithm with full sequence write conversion and FIG. 19A illustrates an algorithm without full sequence write conversion. (Fig. 19A and Fig. 19B are each composed of two parts: a first upper portion starting at an intermittent vertical line A corresponding to t 0 and ending with an intermittent vertical line B corresponding to t 5 ; a continuation of the upper portion And a second lower portion starting from the intermittent vertical line B corresponding to t 5. In both cases, the line B at time t 5 is the same in the upper portion as in the lower portion, and there is only one connection in the two portions Sew to allow it to be displayed on two lines).

圖19A展示一程序,其以讀取在此實例中取作下部頁之第一頁(頁M)而開始,假定先前未快取資料,且以單頁模式而執行,在開始寫入上部頁之前等待直至下部頁結束寫入。程序以時間t 0 處對頁M之讀取(感應頁M(L))而開始,頁M在此處為由此編碼中的A及C位準處之讀取而感應之下部頁。在時間t 1 處讀取完成且可將頁M切出且對其進行檢查或修改。開始於時間t 2 ,藉由於B位準之讀取而感應下一頁(此處為頁M+1,對應於與下部頁M相同之實體的上部頁),其為結束於時間t 3 之程序。在此點上,第一頁(來源於頁M)(下部)準備好被程式化返回至記憶體中頁N處且自頁M+1讀取之資料經保持於鎖存器中且可被轉移出以受到修改/檢查。此等程序中之兩者均可開始於同一時間,在此處為t 3 。藉由使用上文所述之典型時間值,至時間t 4 為止已切出來自頁M+1之資料且已對其進行修改;然而,對於未實施全序列轉換之實施例而言,記憶體將等待直至頁N於時間t 5 處結束以開始將第二讀取頁之資料(來源於頁M+1)寫入頁N+1中。Fig. 19A shows a program which starts by reading the first page (page M) of the lower page in this example, assuming that the data has not been cached before, and is executed in the single page mode, at the beginning of writing the upper page. Wait until the next page ends writing. Program time t M of the read page (page M inductive (L)) is started 0, M where p A and C to read the level of the coding section thereby induced under the page. The reading is completed at time t 1 and page M can be cut out and checked or modified. 2 begins at time t, is read by the quasi-induced B-site in the next page (here page M + 1, corresponding to the upper page and lower page of the same entity M), which is the program ends at time t 3 it. At this point, the first page (from page M) (bottom) is ready to be programmed back to page N in memory and the data read from page M+1 is held in the latch and can be transferred out To be modified/checked. Both of these programs can start at the same time, here is t 3. By using the typical time values described above, the up to time t 4 has been cut out from the data page M + 1 and its modified; however, for an embodiment of full sequence conversion is not implemented, the memory will wait Until page N ends at time t 5 to begin writing the second read page's material (derived from page M+1) to page N+1.

由於頁N+1為上部頁,因此其寫入最初以B位準處之驗證而開始,在時間t 6 處添加C位準。一旦儲存元件於時間t 7 處使目標狀態B全部封鎖(或者達到最大計數),即撤銷B狀態驗證。如上文所述,根據本發明之若干主要態樣,此允許使資料鎖存器自由,暫時中止正在進行之寫入作業,***讀取作業(在與經暫時中止之程式化/驗證作業不同之位址處),寫入接著在其停止之處恢復,且可在經恢復之寫入作業繼續之同時將於經***之寫入作業期間所感應的資料切出。Since the page N + 1 for the upper page, its write verify B level initially at the start, at a time C is added at a level of 6 t. Once the storage elements at time t 7 at the target state B all blocks (or the maximum count is reached), i.e. B revocation status verification. As described above, according to several main aspects of the present invention, this allows the data latch to be free, temporarily suspending the ongoing write job, and inserting the read job (in contrast to the temporarily suspended stylization/verification operation) At the address, the write resumes where it left off, and the data sensed during the inserted write operation can be cut out while the resumed write job continues.

在時間t 7 處關於(此處)下部頁M+2而執行經***之寫入作業。此感應結束於時間t 8 ,且頁N+1之寫入重新拾起,且來自頁M+2之資料同時經切出及修改。在此實例中,頁N+1在頁M+2結束於時間t 10 之前在時間t 9 結束程式化。在時間t 10 處,源自頁M+2之資料之寫入可開始;然而,在此實施例中,替代地,首先執行頁M+3之讀取,此允許將此頁之資料切出及修改藏於開始於時間t 11 的將源自頁M+2之資料寫入頁N+2中之後。程序接著如圖式之早先部分中而繼續,但頁碼改變,其中時間t 11 對應於時間t 3 ,時間t 12 對應於時間t 4 等等,直至複製程序停止。 7 at the time t on (here) lower page M + 2 is performed by the insertion of the write operation. This sensing ends at time t 8 and the write of page N+1 is picked up again, and the data from page M+2 is simultaneously cut out and modified. In this example, page N + 1 page M + 2 ends at time t 10 until at the end of time t. 9 stylized. At time t 10, from page M + written data 2 of the start; however, in this embodiment, alternatively, in this embodiment, is performed first page M read + 3 of, this allows this data page of the cut and modify the hidden begins at time t after the data originating from page M + 2 is written page of the N + 2 11. The program then continues as in the earlier part of the figure, but the page number changes, where time t 11 corresponds to time t 3 , time t 12 corresponds to time t 4 , etc., until the copying process stops.

圖19B再次展示以讀取下部頁(取作下部頁之頁M)而開始且假定先前未快取資料之程序。圖19B不同於圖19A在於其於時間t 4 實施向全序列寫入之轉換。此一般說來將程序加速了如圖19A之時間(t 5 t 4 )。在時間t 4 (=圖19A中之t 5 )處,如先前所述而實施與全序列轉換相關之各種改變。除此之外,程序類似於圖19A者,包括在時間t 7 t 12 之間的本發明之彼等態樣。Fig. 19B again shows a procedure for starting the reading of the lower page (taken as the page M of the lower page) and assuming that the data has not been cached before. FIG 19B is different from FIG 19A t 4 embodiment of the transition to full sequence write at time in its. This generally speeds up the program as shown in Figure 19A ( t 5 - t 4 ). At time t 4 (= t 5 in Fig. 19A), various changes related to the full sequence conversion are performed as previously described. In addition, the procedure is similar to FIG. 19A, including their aspect of the present invention at the time t of between 7 and t 12.

在頁複製程序及此處描述之涉及寫入資料之其他技術中,可遵循以引用的方式併入本文中之美國專利公開案號US-2004-0109362-A1中描述之方法而明智地選擇於給定時間驗證之狀態。舉例而言,在全序列寫入中,寫入處理可開始僅驗證A位準。在A驗證之後,對其進行檢查以觀察是否存在已通過之任何位元。若為如此,則向驗證階段添加B位準。將在所有儲存單位以A位準驗證作為其目標值驗證(或除了基於可設定參數之最大計數)之後將A位準驗證移除。類似地,在B位準處之驗證之後可添加C位準之驗證,其中將在所有儲存單位以B位準驗證作為其目標值驗證(或除了基於可設定參數之最大計數)之後將B位準驗證移除。In the page copying procedure and other techniques described herein that are related to the writing of the material, the method described in U.S. Patent Publication No. US-2004-0109362-A1, which is incorporated herein by reference, is expressly The status of the verification at a given time. For example, in a full sequence write, the write process can begin to verify only the A level. After A verification, it is checked to see if there are any bits that have passed. If so, add the B level to the verification phase. The A level verification will be removed after all storage units are verified as A target value (or in addition to the maximum count based on the settable parameters). Similarly, a C-level verification can be added after verification at the B-level, where the B-bit will be verified after all the storage units have B-level verification as their target value (or in addition to the maximum count based on the settable parameters) Quasi-verification removed.
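The verify scheduling described in this paragraph can be sketched as follows, starting with only the A level enabled. This is an illustrative C fragment only; the state names, helpers and the maximum fail count are assumptions. Verify levels are added as the level below begins to pass and removed once all cells targeting that level have verified, or once the remaining failures fall within a settable maximum count.

```c
#include <stdbool.h>

enum { ST_A = 0, ST_B = 1, ST_C = 2, NUM_STATES = 3 };

extern bool any_cell_passed(int state);   /* some bit has verified at 'state'      */
extern bool all_targets_done(int state);  /* all cells targeting 'state' verified  */
extern int  fail_count(int state);        /* cells targeting 'state' still failing */

#define MAX_FAIL_COUNT 8                  /* assumed settable parameter */

/* verify_enabled[] selects which levels are checked after each program pulse;
 * call this once per pulse/verify cycle to update the schedule. */
void update_verify_schedule(bool verify_enabled[NUM_STATES])
{
    /* Add the next higher verify level once the one below starts passing. */
    if (verify_enabled[ST_A] && any_cell_passed(ST_A))
        verify_enabled[ST_B] = true;
    if (verify_enabled[ST_B] && any_cell_passed(ST_B))
        verify_enabled[ST_C] = true;

    /* Drop a level once (essentially) all of its target cells have verified. */
    if (verify_enabled[ST_A] &&
        (all_targets_done(ST_A) || fail_count(ST_A) <= MAX_FAIL_COUNT))
        verify_enabled[ST_A] = false;
    if (verify_enabled[ST_B] &&
        (all_targets_done(ST_B) || fail_count(ST_B) <= MAX_FAIL_COUNT))
        verify_enabled[ST_B] = false;
}
```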

程式作業期間資料鎖存器中之快取作業Cache operation in the data latch during program operation

關於較佳多狀態編碼而描述具有用於其他作業之背景資料快取之程式化作業。A stylized job with background data cache for other jobs is described with respect to preferred multi-state coding.

對於4態記憶體之例示性較佳"LM"編碼An exemplary preferred "LM" encoding for 4-state memory

圖20A至圖20E說明對於以2位元邏輯代碼("LM"代碼)編碼之4態記憶體之程式化及讀取。此代碼提供容錯性且減輕歸因於Yupin效應之鄰近單元耦合。圖20A說明在每一記憶體單元使用LM代碼儲存兩個位元之資料時4態記憶體陣列之臨限電壓分布。LM編碼不同於習知格雷碼(Gray code)在於上部及下部位元對於狀態"A"及"C"反轉。"LM"代碼已揭示於美國專利第6,657,891號中且具有優勢在於藉由避免需要電荷之較大改變之程式作業而減少鄰近浮動閘極之間的場效耦合。如將於圖20B及圖20C中所見,每一程式化作業導致電荷儲存單位中之電荷的適度改變(如自臨限電壓VT 之適度改變所顯而易見)。Figures 20A-20E illustrate the stylization and reading of 4-state memory encoded in 2-bit logic code ("LM" code). This code provides fault tolerance and mitigates adjacent unit coupling due to the Yupin effect. Figure 20A illustrates the threshold voltage distribution of a 4-state memory array when each memory cell uses the LM code to store two bits of data. The LM code is different from the conventional Gray code in that the upper and lower parts are inverted for the states "A" and "C". The "LM" code is disclosed in U.S. Patent No. 6,657,891 and has the advantage of reducing the field effect coupling between adjacent floating gates by avoiding the need for a programmed operation requiring a large change in charge. As will be seen in Figures 20B and 20C, each stylized job results in a modest change in charge in the charge storage unit (as is apparent from a modest change in the threshold voltage V T ).

對編碼進行設計以使得2個位元(下部及上部)可分別經程式化及讀取。當程式化下部位元時,單元之臨限位準保持於未經程式化之區域中或移動至臨限窗之"中下"區域。當程式化上部位元時,在此等兩個區域中之任一者中之臨限位準進一步前進至稍高(不多於臨限窗之四分之一)之位準。The code is designed such that 2 bits (lower and upper) can be programmed and read separately. When stylizing the lower part, the unit's threshold level remains in the unstylized area or moves to the "lower middle" area of the threshold window. When staging the upper part, the threshold level in either of these two areas advances further to a slightly higher level (not more than a quarter of the threshold window).

圖20B說明使用LM代碼在現有2循環程式化機制中進行之下部頁程式化。容錯LM代碼本質上避免任何上部頁程式化轉變越過任何中間狀態。因此,第一循環下部頁程式化使得邏輯狀態(1,1)轉變為某一中間狀態(x,0),如由將"未經程式化"之記憶體狀態"U"程式化為以(x,0)表示之具有在大於DA 但小於DC 的寬廣分布中之程式化臨限電壓之"中間"狀態所表示。在程式化期間,相對於界線DVA 而驗證中間狀態。Figure 20B illustrates the use of LM code to perform the lower page stylization in the existing 2-loop stylization mechanism. The fault-tolerant LM code essentially prevents any upper page stylized transitions from crossing any intermediate state. Therefore, the lower page stylization of the first loop causes the logic state (1, 1) to transition to an intermediate state (x, 0), as programmed by the "unprogrammed" memory state "U" to ( x, 0) is represented by the "intermediate" state of the programmed threshold voltage in a broad distribution greater than D A but less than D C . During stylization, the intermediate state is verified against the boundary DV A .

圖20C說明使用LM代碼在現有2循環程式化機制中進行之上部頁程式化。在將上部頁位元程式化為"0"之第二循環中,若下部頁位元處於"1",則邏輯狀態(1,1)轉變為(0,1),如由將"未經程式化"之記憶體狀態"U"程式化為"A"所表示。在程式化為"A"期間,驗證係關於DVA 。若下部頁位元處於"0",則藉由自"中間"狀態程式化為"B"而獲得邏輯狀態(0,0)。程式化驗證係關於界線DVB 。類似地,若上部頁將保持於"1",而下部頁已經程式化為"0",則其將需要自"中間"狀態向(1,0)之轉變,如由將"中間"狀態程式化為"C"所表示。程式化驗證係關於界線DVC 。由於上部頁程式化僅涉及向下一鄰近記憶體狀態之程式化,因此自一循環至另一循環無大量電荷改變。設計自"U"至大致"中間"狀態之下部頁程式化以節省時間。Figure 20C illustrates the use of LM code for upper page stylization in an existing 2-loop stylization mechanism. In the second loop of staging the upper page bit to "0", if the lower page bit is at "1", the logic state (1, 1) is changed to (0, 1), as The stylized "memory state"U" is represented by "A". During stylized as "A", the verification is about DV A. If the lower page bit is at "0", the logic state (0, 0) is obtained by staging from "intermediate" state to "B". Stylized verification is about the boundary DV B . Similarly, if the upper page will remain at "1" and the lower page has been programmed to "0", then it will need to transition from the "intermediate" state to (1,0), as will be the "intermediate" state program It is expressed as "C". Stylized verification is about the boundary DV C . Since the upper page stylization involves only the stylization of the next adjacent memory state, there is no significant charge change from one cycle to another. Designed from "U" to roughly "intermediate" state to save time.

在較佳實施例中,實施在較早章節中所提之"快速通過寫入"程式化技術。舉例而言,在圖20C中,最初程式化驗證("pvfyAL ")係關於經設定於低於DVA 之邊緣處的D VAL 。一旦對單元進行於DVAL 處之程式化驗證,則後續程式化將以較精細之級距而進行且程式化驗證(pvfyA)將關於DVA 。因此在程式化作業期間必須鎖存額外轉變態ALOW 以指示已對單元進行關於DAL 之程式化驗證。類似地,若實施QPW以程式化為"B"狀態,則將存在額外轉變態BLOW 待鎖存。對於BLOW 之程式化驗證將關於界線DVBL 且對於"B"之程式化驗證將關於界線DVB 。在處於ALOW 或BLOW 狀態中時,對所述記憶體單元之程式化將藉由對位元線電壓加合適偏壓或藉由修改程式化脈衝而被切換至較緩慢(亦即,較精細)之模式。以此方式,最初可使用較大程式化級距以用於在無超出目標狀態之危險的情況下快速收斂。2005年12月29日申請且題為"Methods for Improved Program-Verify Operations in Non-Volatile Memories"之美國專利申請案序號11/323,596(其全部揭示內容以引用的方式併入本文中)中已揭示"QPW"程式化演算法。In the preferred embodiment, the "fast pass write" stylization technique mentioned in the earlier section is implemented. For example, in Figure 20C, the initial stylized verification ("pvfyA L ") is for DV AL set at an edge below DV A . Once the unit is programmed for DV AL , the subsequent stylization will be done in a finer pitch and the stylized verification (pvfyA) will be about DV A . Therefore, the extra transition state A LOW must be latched during the stylization job to indicate that the unit has been programmed for D AL . Similarly, if QPW is implemented to be programmed to the "B" state, there will be an additional transition state B LOW to be latched. The stylized verification for B LOW will be about the boundary DV BL and the stylized verification for "B" will be about the boundary DV B . When in the A LOW or B LOW state, the programming of the memory cell will be switched to be slower by applying a suitable bias voltage to the bit line voltage or by modifying the stylized pulse (ie, Fine) mode. In this way, a larger stylized step can be initially used for fast convergence without risk of exceeding the target state. U.S. Patent Application Serial No. 11/323,596, filed on Dec. "QPW" stylized algorithm.
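A per-cell sketch of the quick pass write (QPW) behavior described above is given below in C; it is an illustration, not the patent's circuit-level implementation, and the type and function names are assumptions. Each cell first verifies against the low verify level, is then latched into its transition state and slowed down (for example by a suitable bit line bias), and finally locks out at the full verify level.

```c
#include <stdbool.h>

typedef struct {
    bool passed_low;   /* latched transition state, e.g. A_LOW or B_LOW */
    bool locked_out;   /* cell has reached its target state             */
} qpw_cell_t;

extern bool verify(int cell, double level);        /* sense against a verify level */
extern void set_bitline_partial_inhibit(int cell); /* switch to the slow/fine mode */
extern void set_bitline_full_inhibit(int cell);    /* stop further programming     */

/* One verify step for one cell with quick pass write.
 * v_low is the low verify level (e.g. DV_AL) and v_full the final level (e.g. DV_A). */
void qpw_verify_step(int cell, qpw_cell_t *s, double v_low, double v_full)
{
    if (s->locked_out)
        return;

    if (!s->passed_low && verify(cell, v_low)) {
        s->passed_low = true;               /* latch the A_LOW / B_LOW state     */
        set_bitline_partial_inhibit(cell);  /* program more slowly from now on   */
    }
    if (s->passed_low && verify(cell, v_full)) {
        s->locked_out = true;               /* target state reached              */
        set_bitline_full_inhibit(cell);
    }
}
```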

圖20D說明瞭解以LM代碼編碼之4態記憶體之下部位元所需的讀取作業。解碼將視是否已對上部頁進行程式化而定。若已對上部頁進行程式化,則讀取下部頁將需要關於劃界臨限電壓DB 之讀取B之一讀取通過。另一方面,若尚未對上部頁進行程式化,則將下部頁程式化為"中間"狀態(圖20B),且讀取B將引起誤差。相反,讀取下部頁將需要關於劃界臨限電壓DA 之讀取A之一讀取通過。為了分辨兩種情形,在對上部頁進行程式化時在上部頁中(通常在附加項或系統區中)寫入旗標("LM"旗標)。在讀取期間,將首先假定已對上部頁進行程式化且因此將執行讀取B作業。若LM旗標經讀取,則假定正確且完成讀取作業。另一方面,若第一讀取未產生旗標,則其將指示尚未對上部頁進行程式化且因此需藉由讀取A作業而讀取下部頁。Figure 20D illustrates the read operation required to understand the location elements below the 4-state memory encoded in the LM code. Decoding will depend on whether the upper page has been programmed. If the upper page has been programmed, reading the lower page will require reading of one of the readings B regarding the demarcation threshold voltage D B . On the other hand, if the upper page has not been programmed, the lower page is programmed into the "intermediate" state (Fig. 20B), and reading B will cause an error. Conversely, reading the lower page will require a read read of one of the read A of the demarcation threshold voltage D A . To distinguish between the two cases, the flag ("LM" flag) is written in the upper page (usually in the add-on or system area) when the upper page is programmed. During the read, it will first be assumed that the upper page has been programmed and thus the read B job will be executed. If the LM flag is read, it is assumed to be correct and the read operation is completed. On the other hand, if the first read does not produce a flag, it will indicate that the upper page has not been programmed and therefore the lower page needs to be read by reading the A job.

圖20E說明瞭解以LM代碼編碼之4態記憶體之上部位元所需的讀取作業。如自圖式為清楚的,上部頁讀取將需要讀取A及讀取C之2次通過讀取,其分別係關於劃界臨限電壓DA 及DC 。類似地,若尚未對上部頁進行程式化,則亦可藉由"中間"狀態干擾上部頁之解碼。再一次,LM旗標將指示是否已對上部頁進行程式化。若尚未對上部頁進行程式化,則讀取資料將被重設為"1"而指示未對上部頁資料進行程式化。Figure 20E illustrates the read operation required to understand the location elements above the 4-state memory encoded in the LM code. As clear from the figure, the upper page read will need to read A and read C twice to read, which are related to the demarcation threshold voltages D A and D C respectively . Similarly, if the upper page has not been programmed, the decoding of the upper page can also be disturbed by the "intermediate" state. Again, the LM flag will indicate if the upper page has been programmed. If the upper page has not been programmed, the read data will be reset to "1" indicating that the upper page data has not been programmed.
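The read decision logic for the LM code just described can be sketched in C as follows; this is for illustration only, and the function names and return conventions are assumptions. The characters 'A', 'B' and 'C' stand for the demarcation threshold levels DA, DB and DC of Figures 20D and 20E.

```c
#include <stdbool.h>

extern bool sense_above(int cell, char demarcation);    /* 'A', 'B' or 'C'            */
extern bool lm_flag_set_after_readB(void);              /* LM flag from overhead area */
extern bool lm_flag_set_after_upper_read(void);

/* Lower page read with the LM code: try read B first, fall back to read A
 * if the LM flag shows the upper page was never programmed. */
int read_lower_page_bit(int cell)
{
    int bit = sense_above(cell, 'B') ? 0 : 1;   /* provisional read-B result      */
    if (lm_flag_set_after_readB())
        return bit;                             /* upper page programmed: done    */
    return sense_above(cell, 'A') ? 0 : 1;      /* otherwise use read A instead   */
}

/* Upper page read with the LM code: needs read A and read C; if the upper
 * page was never programmed, the result is reset to "1". */
int read_upper_page_bit(int cell)
{
    bool above_a = sense_above(cell, 'A');
    bool above_c = sense_above(cell, 'C');
    if (!lm_flag_set_after_upper_read())
        return 1;                               /* upper page not programmed      */
    /* With LM coding the upper bit is 0 for the "A" and "B" states (between
     * DA and DC) and 1 below DA ("U") or above DC ("C"). */
    return (above_a && !above_c) ? 0 : 1;
}
```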

以LM代碼及QPW進行之程式作業期間之鎖存器利用Latch utilization during program operation with LM code and QPW

如圖10所示,每一位元線允許讀取/寫入模組沿記憶體陣列之所選列而存取給定記憶體單元。存在於一列之記憶體單元之一頁上並行操作的P個讀取/寫入模組之頁。每一讀取/寫入模組包含耦接至通用處理器500之感應放大器212-1及資料鎖存器430-1。感應放大器212-1經由位元線感應記憶體單元之傳導電流。資料由通用處理器500處理且儲存於資料鎖存器430-1中。藉由耦接至資料鎖存器之I/O匯流排231(見圖13及圖14)而實現記憶體陣列外部之資料交換。在較佳架構中,由沿一列之一游程的p個鄰接記憶體單元形成頁,該等記憶體單元共用相同字線且可由記憶體陣列之p個鄰接位元線存取。在替代架構中,藉由沿一列之偶數或奇數記憶體單元而形成頁。以足以執行各種所需記憶體作業之最少n個鎖存器DL1至DLn而實施資料鎖存器430-1。圖13及圖14說明4態記憶體之較佳組態,其中存在三個鎖存器DL0至DL2。As shown in Figure 10, each bit line allows a read/write module to access a given memory cell along a selected row of the memory array. There is a page of p read/write modules operating in parallel on a page of memory cells in one row. Each read/write module comprises a sense amplifier 212-1 and data latches 430-1 coupled to a general purpose processor 500. The sense amplifier 212-1 senses the conduction current of the memory cell via the bit line. The data is processed by the general purpose processor 500 and stored in the data latches 430-1. Data exchange external to the memory array is effected by the I/O bus 231 (see Figures 13 and 14) coupled to the data latches. In a preferred architecture, the page is formed by a run of p contiguous memory cells along a row, which share the same word line and are accessible by p contiguous bit lines of the memory array. In an alternative architecture, the page is formed by the even or the odd memory cells along a row. The data latches 430-1 are implemented with a minimum of n latches DL1 to DLn sufficient to perform the various required memory operations. Figures 13 and 14 illustrate a preferred configuration for a 4-state memory in which there are three latches DL0 to DL2.
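For the discussion that follows, the latch arrangement just described can be modelled with a small data structure. The structure below is an illustrative abstraction, not the circuit of Figures 13 and 14; the sizes and field names are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_CELLS  4096   /* p cells sharing one word line (example size only) */
#define NUM_LATCHES 3      /* DL0, DL1, DL2 for the 4-state configuration       */

/* One read/write module serving one bit line. */
struct rw_module {
    bool    sa_result;               /* sense amplifier output                    */
    uint8_t dl[NUM_LATCHES];         /* data latches, one bit each                */
    bool    dl_in_use[NUM_LATCHES];  /* latch currently reserved by the core job? */
};

/* The page of read/write modules operating in parallel on one row. */
struct rw_stack {
    struct rw_module mod[PAGE_CELLS];
};

/* Latches of a module that are free for background caching. */
static int free_latches(const struct rw_module *m)
{
    int n = 0;
    for (int i = 0; i < NUM_LATCHES; i++)
        if (!m->dl_in_use[i])
            n++;
    return n;
}
```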

當前頁程式化期間之下一頁程式化資料載入The next page of stylized data loading during the current page stylization

圖21為說明將下一頁程式化資料載入未使用之資料鎖存器中之背景作業的下部頁程式化之示意時序圖。同時展示主機、I/O匯流排、資料鎖存器及記憶體核心之行為。圖20B中說明以LM代碼進行之下部頁程式化,其中將抹除或未經程式化之狀態(1,1)程式化為"中下"或中間狀態(X,0)。在此情形下,一位元(即,下部位元)將足以在未經程式化之"1"狀態與中間"0"狀態之間進行分辨。舉例而言,DL2(見圖13及圖14)可用以儲存下部位元。Figure 21 is a schematic timing diagram illustrating the programming of the lower page of the background job loading the next page of stylized data into the unused data latches. It also shows the behavior of the host, I/O bus, data latch, and memory core. The lower page stylization with the LM code is illustrated in Fig. 20B, in which the erased or unstylized state (1, 1) is programmed into a "middle down" or intermediate state (X, 0). In this case, one bit (ie, the lower part element) will be sufficient to distinguish between the unprogrammed "1" state and the intermediate "0" state. For example, DL2 (see Figures 13 and 14) can be used to store the lower part elements.

在第N頁資料待寫入時,主機最初向記憶體發布寫入指令以將該頁資料寫入至指定位址。此後為將待經程式化的該頁資料發送至記憶體。經由I/O匯流排將程式化資料切入且將其鎖存至每一讀取/寫入模組之DL2中。因此,I/O匯流排在此切入週期(例如可具有300 μs之持續時間)期間暫時忙碌。When the data on page N is to be written, the host initially issues a write command to the memory to write the page data to the specified address. Thereafter, the page material to be stylized is sent to the memory. The stylized data is cut in via the I/O bus and latched into the DL2 of each read/write module. Therefore, the I/O bus is temporarily busy during this hand-in cycle (eg, can have a duration of 300 μs).

下部頁程式化為二進位的且僅需在如藉由DVA 臨限位準劃分的"U"狀態與"中間狀態"(見圖20B)之間分辨。施加至字線之每一程式化脈衝由讀回或程式化驗證跟隨以判定單元是否已達到表示程式化資料之目標狀態。在此情形下,程式化驗證為關於DVA 之("pvfyA")。因此僅需要來自每一讀取/寫入模組之一鎖存器以儲存每一單元之一位元。The lower page is stylized as binary and only needs to be resolved between the "U" state and the "intermediate state" (see Figure 20B) as partitioned by the DV A threshold. Each stylized pulse applied to the word line is followed by a readback or stylized verification to determine if the cell has reached the target state representing the stylized data. In this case, the stylized verification is about DV A ("pvfyA"). Therefore only one latch from each read/write module is needed to store one bit per cell.

關於資料鎖存器,含有程式化資料之DL2積極地用於發生於記憶體陣列或記憶體核心中之當前下部位元程式化作業。因此,正由核心使用之鎖存器之數目為一,而另兩個鎖存器(即DL0及DL1)仍為閒置的。Regarding the data latch, DL2 containing stylized data is actively used for the current lower part metaprogramming that occurs in the memory array or memory core. Therefore, the number of latches being used by the core is one, while the other two latches (ie, DL0 and DL1) are still idle.

在核心處之程式化繼續之同時,兩個閒置之鎖存器及空閒之I/O匯流排可用於設立下一頁程式化資料。主機可發布另一指令以寫入第N+1頁資料且經由I/O匯流排切換資料以鎖存於兩個空閒之鎖存器中之一者(如DL0)中。以此方式,一旦核心完成程式化第N頁,其即可開始對第N+1頁進行程式化而無需等待另一300 μs而使資料切入。While the stylization continues at the core, two idle latches and idle I/O busses can be used to set up the next page of stylized data. The host may issue another instruction to write to the N+1th page and switch the data via the I/O bus to be latched in one of the two free latches (eg, DL0). In this way, once the core finishes staging the Nth page, it can begin to program the N+1th page without waiting for another 300 μs to make the data cut.
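From the host's point of view, the sequence above amounts to issuing the page-N write and then streaming page N+1 as soon as the chip reports a free latch. The sketch below is a hypothetical host-side flow: the opcode values, helper functions and handshaking are placeholders invented for the example, not commands defined by the patent.

```c
#include <stddef.h>
#include <stdint.h>

/* Stubs so the sketch compiles stand-alone; a real driver would drive the
 * memory's I/O bus and status signals instead. */
static void io_write_cmd(uint8_t cmd)                      { (void)cmd; }
static void io_write_addr(uint32_t addr)                   { (void)addr; }
static void io_write_data(const uint8_t *buf, size_t len)  { (void)buf; (void)len; }
static int  cache_latch_available(void)                    { return 1; }

/* Hypothetical opcodes; real devices define their own command set. */
enum { CMD_DATA_IN = 0x80, CMD_CACHE_PROGRAM = 0x15, CMD_PROGRAM = 0x10 };

static void program_with_cached_next_page(uint32_t addr_n,  const uint8_t *page_n,
                                           uint32_t addr_n1, const uint8_t *page_n1,
                                           size_t len)
{
    /* Page N: load into a latch (the ~300 us toggle-in) and start the core program. */
    io_write_cmd(CMD_DATA_IN);
    io_write_addr(addr_n);
    io_write_data(page_n, len);
    io_write_cmd(CMD_CACHE_PROGRAM);        /* core begins programming page N       */

    /* While the core programs page N, stream page N+1 into an idle latch. */
    while (!cache_latch_available())
        ;                                   /* wait for a free data latch           */
    io_write_cmd(CMD_DATA_IN);
    io_write_addr(addr_n1);
    io_write_data(page_n1, len);
    io_write_cmd(CMD_PROGRAM);              /* queued; starts once page N completes */
}

int main(void)
{
    static uint8_t a[2048], b[2048];
    program_with_cached_next_page(0x1000, a, 0x1001, b, sizeof a);
    return 0;
}
```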

在此點處,已使用兩個鎖存器(例如,DL2及DL0),一者用於正在進行的對第N頁(下部頁)之程式化且一者用於快取第N+1頁之程式化資料。因此,多出一鎖存器為空閒的,但對其之利用將視已經快取之第N+1頁為上部頁或下部頁而定。At this point, two latches (eg, DL2 and DL0) have been used, one for the ongoing stylization of the Nth page (lower page) and one for the program to cache the N+1th page. Information. Therefore, an extra latch is idle, but its utilization will depend on whether the N+1th page that has been cached is the upper page or the lower page.

若第N+1頁為通常屬於相同頁單元或字線之上部頁,則在較佳實施例中,必須保留最後空閒之鎖存器以最佳化上部頁之後續程式化。此係由於"快速通過寫入"("QPW")程式化演算法(在早先章節中提及)之實施要求額外鎖存器以儲存旗標來指示是否已將單元程式化為接近於目標狀態。If page N+1 is a page that normally belongs to the same page unit or word line, then in the preferred embodiment, the last free latch must be retained to optimize subsequent programming of the upper page. This implementation of the "Fast Write Through" ("QPW") stylized algorithm (mentioned in the earlier section) requires an additional latch to store the flag to indicate whether the unit has been programmed to be close to the target state. .

若第N+1頁為屬於單元或字線之另一頁之另一下部頁,則可視情況使用最後空閒之鎖存器以在主機提出之情況下對另一第N+2(下部或上部)頁資料進行快取。If page N+1 is another lower page belonging to another page of the cell or word line, the last idle latch may be used as appropriate to perform another N+2 (lower or upper) page data if the host issues it. Cache.

圖22為展示在使用QPW之4態上部頁或全序列程式化之各種階段期間需追蹤的狀態之數目的表。圖20C中說明以LM代碼進行之上部頁或全序列程式化,其中分別將下部頁狀態"U"或(1,1)中之一些及"中間"狀態(X,0)進一步程式化為狀態"A"或(0,1)、"B"或(0,0)及"C"或(1,0)。詳言之,狀態"A"係由"U"程式化而來且狀態"B"及"C"係由"中間"程式化而來。在實施QPW技術用於狀態"A"及"B"但非"C"之情況下,程式化最初需要在總計共五個狀態之基本狀態"A"、"B"及"C"加上"ALOW "及"BLOW "之間進行分辨。在三個位元處於三個鎖存器之情況中,存在2^3 或八個可能代碼,其對於在彼等六個狀態之間進行分辨而言係足夠的。Figure 22 is a table showing the number of states that need to be tracked during the various phases of 4-state upper-page or full-sequence programming using QPW. Figure 20C illustrates upper-page or full-sequence programming with the LM code, in which some of the lower-page states "U" or (1,1) and the "intermediate" states (X,0) are further programmed to the states "A" or (0,1), "B" or (0,0) and "C" or (1,0) respectively. In particular, the state "A" is programmed from "U" and the states "B" and "C" are programmed from the "intermediate" state. Where the QPW technique is implemented for the states "A" and "B" but not "C", the programming initially needs to distinguish among the basic states "A", "B" and "C" plus "A LOW " and "B LOW ", a total of five states (six including the program-lockout state "L"). With three bits in three latches there are 2^3, or eight, possible codes, which is sufficient to distinguish among those six states.

程式化期間之若干階段可隨程式化前進而出現Several stages of the stylization period can occur as stylized progress

"A"完成-在已關於DA 界線而程式化驗證目標為"A"狀態之頁中的所有單元之後。此將需要首先完成關於DAL 界線之程式化驗證。存在四個狀態"L"(程式化封鎖)、"BL "、"B"及"C"需留意。此將需要以兩位元代碼表2CT("A")提供之預定編碼而儲存兩個位元之兩個鎖存器。"A" Completion - after all units in the page that have been programmed to verify the target "A" status with respect to the D A boundary. This will require first completing the stylized verification of the D AL boundary. There are four states "L" (stylized blockade), "B L ", "B", and "C" to be aware of. This would require storing two latches of two bits with a predetermined code provided by the two-dimensional code table 2CT ("A").

"B"完成-在已關於DB 界線而程式化驗證目標為"B"狀態之頁中的所有單元之後。此將需要首先完成關於DBL 界線之程式化驗證。存在四個狀態"L"、"AL "、"A"及"C"需留意。此將需要以兩位元代碼表2CT("B")提供之預定編碼而儲存兩個位元之兩個鎖存器。"B" Completion - after all units in the page that have been programmed to verify the target "B" status with respect to the D B boundary. This will require first to complete the stylized verification of the D BL boundary. There are four states "L", "A L ", "A", and "C" to be aware of. This would require storing two latches of two bits with a predetermined encoding provided by the two-dimensional code table 2CT ("B").

"C"完成-在已關於DC 界線而程式化驗證目標為"C"狀態之頁中的所有單元之後。存在五個狀態"L"、"AL "、"A"、"BL "及"B"需留意。此將需要以三位元代碼表3CT("C")提供之預定編碼而儲存三個位元之三個鎖存器。"C" Completion - after all units in the page that have been programmed to verify the target "C" status with respect to the D C boundary. There are five states "L", "A L ", "A", "B L " and "B" to be aware of. This would require three latches of three bits to be stored in a predetermined code provided by the three-bit code table 3CT ("C").

"A"+"B"完成-在已分別關於DA 界線及DB 界線而程式化驗證目標為"A"狀態及"B"狀態之頁中的所有單元之後。存在兩個狀態"L"及"C"需留意。此將需要以一位元代碼表1CT("A"+"B")提供之預定編碼而儲存一位元之一鎖存器。"A" + "B" is completed - after having on each line D A and D B and stylized boundary verification target page for all the cells "A" and the state of the state "B" in. There are two states "L" and "C" to be aware of. This would require storing one of the one-bit latches with the predetermined encoding provided by the one-bit code table 1CT ("A"+"B").

"A"+"C"完成-在已分別關於DA 界線及DC 界線而程式化驗證目標為"A"狀態及"C"狀態之頁中的所有單元之後。存在三個狀態"L"、"BL "及"B"需留意。此將需要以兩位元代碼表2CT("A"+"C")提供之預定編碼而儲存兩個位元之兩個鎖存器。"A" + "C" is completed - after all the cells in the page that have been programmed to verify the target "A" state and "C" state with respect to the D A boundary and the D C boundary, respectively. There are three states "L", "B L " and "B" to be aware of. This would require storing two latches of two bits in a predetermined code provided by the two-dimensional code table 2CT ("A" + "C").

"B"+"C"完成-在已分別關於DB 界線及DC 界線而程式化驗證目標為"B"狀態及"C"狀態之頁中的所有單元之後。存在三個狀態"L"、"AL "及"A"需留意。此將需要以兩位元代碼表2CT("B"+"C")提供之預定編碼而儲存兩個位元之兩個鎖存器。"B" + "C" is completed - after having on each boundary and D B and D C stylized boundary verification target page for all the cells "B" and the state "C" in the state. There are three states "L", "A L " and "A" to be aware of. This would require storing two latches of two bits in a predetermined code provided by the two-dimensional code table 2CT ("B" + "C").

"A"+"B"+"C"完成-在已分別關於DA 界線、DB 界線及DC 界線而程式化驗證目標為"A"狀態、"B"狀態及"C"狀態之頁中的所有單元之後。已程式化驗證頁之所有目標狀態且完成對該頁之程式化。將不需要鎖存器。"A"+"B"+"C" is completed - the pages of the "A" state, the "B" state, and the "C" state are stylized for the D A boundary, the D B boundary, and the D C boundary, respectively. After all the units in . The target status of the page has been stylized and the stylization of the page has been completed. No latches will be needed.

圖23為說明將下一頁程式化資料載入未使用之資料鎖存器中之背景作業的上部頁或全序列程式化之示意時序圖。同時展示主機、I/O匯流排、資料鎖存器及記憶體核心之行為。Figure 23 is a schematic timing diagram illustrating the upper page or full sequence stylization of the background job loading the next page of stylized data into the unused data latches. It also shows the behavior of the host, I/O bus, data latch, and memory core.

當上部頁資料之第N頁待寫入時,必須參考先前程式化之下部頁資料。先前程式化之下部頁已鎖存於每一讀取/寫入模組之DL2中。關於上部頁資料之第N頁,主機最初向記憶體發布寫入指令以將該頁資料寫入至指定位址。此後為將待經程式化的該頁資料發送至記憶體。經由I/O匯流排將程式化資料切入且將其鎖存至每一讀取/寫入模組之DL0中。因此,I/O匯流排在此切入週期(例如可具有300 μs之持續時間)期間暫時忙碌。When the Nth page of the upper page data is to be written, the previous stylized lower page data must be referred to. The previously stylized lower page is latched in DL2 of each read/write module. On page N of the upper page data, the host initially issues a write command to the memory to write the page data to the specified address. Thereafter, the page material to be stylized is sent to the memory. The stylized data is cut in via the I/O bus and latched into the DL0 of each read/write module. Therefore, the I/O bus is temporarily busy during this hand-in cycle (eg, can have a duration of 300 μs).

上部頁或全序列程式化為多狀態的,其中狀態"A"、"B"及"C"分別由DA 、DB 及DC 劃界(見圖20C)。施加至字線之每一程式化脈衝由讀回或程式化驗證跟隨以判定單元是否已達到表示程式化資料之目標狀態。The upper page or full sequence is stylized as multi-state, with states "A", "B", and "C" demarcated by D A , D B , and D C , respectively (see Figure 20C). Each stylized pulse applied to the word line is followed by a readback or stylized verification to determine if the cell has reached the target state representing the stylized data.

如圖22中所示,在程式化期間需要之鎖存器之數目關於程式化已進行至何階段而變化。舉例而言,最初使用所有三個鎖存器。當已程式化驗證所有"A"狀態("A"完成)時,在後續程式化期間記憶體核心僅需要兩個鎖存器(例如,DL2及DL1)以儲存四個可能狀態。此使得一鎖存器(例如,DL0)空閒以用於快取作業。As shown in Figure 22, the number of latches required during stylization varies with respect to the stage to which stylization has taken place. For example, all three latches are initially used. When all "A" states have been programmed ("A" is completed), the memory core only needs two latches (eg, DL2 and DL1) to store the four possible states during subsequent stylization. This frees a latch (eg, DL0) for a cache job.

在核心處之程式化繼續之同時,空閒之鎖存器及空閒之I/O匯流排可用於設立下一頁程式化資料。主機可發布另一指令以寫入第N+1頁資料(下部頁資料)且經由I/O匯流排切換資料以鎖存於空閒之鎖存器DL0中。以此方式,一旦核心完成程式化第N頁,其即可開始對第N+1頁進行程式化而無需等待另一300 μs而使資料切入。將相同考慮應用於如圖22所示存在至少一空閒之鎖存器之其他程式化階段中。While the stylization continues at the core, idle latches and idle I/O busses can be used to set up the next page of stylized data. The host may issue another instruction to write the N+1th page material (lower page material) and switch the data via the I/O bus to be latched in the idle latch DL0. In this way, once the core finishes staging the Nth page, it can begin to program the N+1th page without waiting for another 300 μs to make the data cut. The same considerations apply to other stylized stages in which there is at least one idle latch as shown in FIG.

另一可能性為當程式化進入僅需一鎖存器以執行且因此具有兩個空閒之鎖存器用於快取作業之階段時。舉例而言,如圖22所示,此發生於已程式化驗證"A"及"B"狀態兩者時。在此點上,兩個鎖存器可用。若為了載入(N+1)下部頁資料而用盡一鎖存器,則剩餘一者可用以載入(N+2)上部或下部頁資料。Another possibility is when stylized into a phase where only one latch is needed to execute and thus has two free latches for the cache job. For example, as shown in Figure 22, this occurs when both the "A" and "B" states have been programmed to verify. At this point, two latches are available. If a latch is used to load (N+1) the next page data, the remaining one can be used to load (N+2) the upper or lower page data.

若第(N+1)頁為通常屬於相同頁單元或字線之上部頁,則在較佳實施例中,必須保留最後空閒之鎖存器以最佳化上部頁之後續程式化。此係由於"快速通過寫入"("QPW")程式化演算法(在早先章節中提及)之實施要求額外鎖存器以儲存一或兩個旗標來指示是否已將單元程式化為接近於目標狀態。If the (N+1)th page is a page that normally belongs to the same page unit or word line, then in the preferred embodiment, the last free latch must be retained to optimize subsequent stylization of the upper page. This implementation of the "Quick Write Through" ("QPW") stylized algorithm (mentioned in the earlier section) requires an additional latch to store one or two flags to indicate whether the unit has been programmed to Close to the target state.

若第(N+1)頁為屬於單元或字線之另一頁之另一下部頁,則可視情況使用最後空閒之鎖存器以在主機提出之情況下對另一第(N+2)(下部或上部)頁資料進行快取。If the (N+1)th page is another lower page belonging to another page of the cell or word line, the last idle latch may be used as appropriate to the other (N+2) if the host proposes (lower or upper) page data for quick access.

根據本發明之一態樣,當寫入作業之多個階段關於待追蹤之狀態之數目而變化時,階段相依之編碼致能對可用資料鎖存器之有效利用,藉此允許最大量之剩餘鎖存器用於背景快取作業。According to one aspect of the invention, as the multiple phases of a write operation vary in the number of states to be tracked, a phase-dependent encoding enables efficient use of the available data latches, thereby allowing the maximum number of remaining latches to be used for background cache operations.

圖24為說明根據本發明之一般實施例的與當前多階段記憶體作業同時發生之鎖存器作業之流程圖。24 is a flow diagram illustrating a latch operation occurring concurrently with a current multi-stage memory job in accordance with a general embodiment of the present invention.

步驟600:開始執行具有一具有記憶體單元之可定址頁的記憶體陣列之記憶體。Step 600: Begin execution of a memory having a memory array having addressable pages of memory cells.

步驟610:向經定址之頁之每一記憶體單元提供一具有鎖存預定數目之位元之能力的資料鎖存器集合。Step 610: Provide each memory cell of the addressed page with a set of data latches having the ability to latch a predetermined number of bits.

記憶體陣列中之當前多階段記憶體作業Current multi-stage memory operations in memory arrays

步驟620:對記憶體陣列執行當前記憶體作業,該記憶 體作業具有一或多個階段,每一階段與作業狀態之預定集合相關聯。Step 620: Perform a current memory job on the memory array, the memory A body job has one or more phases, each phase being associated with a predetermined set of job states.

藉由有效的階段相依之編碼而使鎖存器自由Free latches with efficient phase-dependent coding

步驟622:對於每一階段提供一階段相依之編碼,以使得對於階段中之至少一些而言,其作業狀態之集合以大體上最小量之位元編碼從而有效地利用資料鎖存器之集合且使空閒資料鎖存器之一子集自由。Step 622: For each phase, provide a phase-dependent encoding such that, for at least some of the phases, the set of operating states is encoded with a substantially minimum number of bits, so as to use the set of data latches efficiently and to free up a subset of the data latches as idle.

同時發生之鎖存器作業Simultaneous latch operation

步驟624:與當前記憶體作業同時發生,以與對於記憶體陣列進行之一或多個後續記憶體作業相關的資料對空閒資料鎖存器之子集執行作業。Step 624: Concurrently with the current memory operation, operate on the subset of free data latches with data related to one or more subsequent memory operations on the memory array.

當前程式化期間之讀取中斷Read interrupt during current stylization

圖25為下部頁程式化之示意時序圖,其說明使用可用鎖存器而進行之讀取中斷作業。同時展示主機、I/O匯流排、資料鎖存器及記憶體核心之行為。Figure 25 is a schematic timing diagram of the lower page stylization illustrating the read interrupt operation using the available latches. It also shows the behavior of the host, I/O bus, data latch, and memory core.

在第N頁資料待寫入時,主機最初向記憶體發布寫入指令以將該頁資料寫入至指定位址。此後為將待經程式化的該頁資料發送至記憶體。經由I/O匯流排將程式化資料切入且將其鎖存至每一讀取/寫入模組之DL2中(見圖13及圖14)。因此,I/O匯流排在此切入週期(例如可具有300μs之持續時間)期間暫時忙碌。When the data on page N is to be written, the host initially issues a write command to the memory to write the page data to the specified address. Thereafter, the page material to be stylized is sent to the memory. The stylized data is cut in via the I/O bus and latched into the DL2 of each read/write module (see Figures 13 and 14). Therefore, the I/O bus is temporarily busy during this hand-in cycle (eg, may have a duration of 300 [mu]s).

下部頁程式化為二進位的且僅需在如藉由DA 臨限位準劃分的"U"狀態與"中間狀態"(見圖20A)之間分辨。施加至字線之每一程式化脈衝由讀回或程式化驗證跟隨以判定單元是否已達到表示程式化資料之目標狀態。在此情形下,程式化驗證為關於DA 之("pvfyA")。因此,僅需要來自每一讀取/寫入模組之一鎖存器以儲存每一單元之一位元。The lower page is stylized as binary and only needs to be resolved between the "U" state and the "intermediate state" (see Figure 20A) as partitioned by the D A threshold. Each stylized pulse applied to the word line is followed by a readback or stylized verification to determine if the cell has reached the target state representing the stylized data. In this case, the stylized verification is about D A ("pvfyA"). Therefore, only one latch from each read/write module is needed to store one bit per cell.

關於資料鎖存器,含有程式化資料之DL2積極地用於發生於記憶體陣列或記憶體核心中之當前下部位元程式化作業。因此,正由核心使用之鎖存器之數目為一,而另兩個鎖存器(即DL0及DL1)仍為閒置的。Regarding the data latch, DL2 containing stylized data is actively used for the current lower part metaprogramming that occurs in the memory array or memory core. Therefore, the number of latches being used by the core is one, while the other two latches (ie, DL0 and DL1) are still idle.

在核心處之程式化繼續之同時,兩個閒置之鎖存器及空閒之I/O匯流排可用於讀取作業。讀取作業需要已由當前程式化作業先占之記憶體核心(亦即,記憶體陣列)自身中之感應。然而,讀取作業之實際感應階段通常遠遠短於程式作業(通常為程式化時間之十分之一),從而可中斷後者而插入感應作業而不引起效能之較大損失。在感應之後,將讀取資料鎖存於空閒資料鎖存器中之一或多者中。使用者接著可將讀取資料切出至I/O匯流排。此處可節省時間,因為其與記憶體陣列中之程式作業同時發生。While programming continues at the core, the two idle latches and the free I/O bus can be used for a read operation. A read operation requires sensing in the memory core (that is, the memory array) itself, which has been preempted by the current programming operation. However, the actual sensing phase of a read operation is typically much shorter than a program operation (usually about one tenth of the programming time), so the latter can be interrupted to insert the sensing operation without a significant loss of performance. After sensing, the read data is latched in one or more of the free data latches. The user can then shift the read data out to the I/O bus. Time is saved here because this takes place concurrently with the program operation in the memory array.

因此,在對下部頁進行程式化之同時,主機可發布讀取指令以中斷程式化同時應暫停之要求將程式化狀態儲存於資料鎖存器中。感應另一頁資料且將其鎖存於兩個空閒鎖存器中之一者(如DL0)中。接著程式化可以所儲存之程式化狀態而恢復。在記憶體陣列仍由恢復之程式化所佔據的同時可將資料鎖存器中之讀取資料切出至I/O匯流排。Therefore, while the lower page is being programmed, the host can issue a read command to interrupt the stylization while storing the stylized state in the data latch. Inducts another page of data and latches it in one of two free latches (such as DL0). The stylization can then be resumed by the stored stylized state. The read data in the data latch can be sliced out to the I/O bus while the memory array is still occupied by the resumed stylization.
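The suspend/sense/resume flow just described can be drawn as a short control sequence. The functions below are invented glue for illustration only; the patent's on-chip state machine handles the actual suspension and keeps the program state in its data latch.

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t dl0[8];                    /* free latch receiving the sensed page */

static void core_suspend_program(void)  { puts("program pulse train paused"); }
static void core_resume_program(void)   { puts("program resumed from latched state"); }
static void core_sense_page(uint32_t a) { dl0[0] = (uint8_t)a; puts("sense (~25 us)"); }

static void io_shift_out(const uint8_t *buf, int n)
{
    for (int i = 0; i < n; i++)
        printf("%02x", buf[i]);
    putchar('\n');
}

/* A read command arriving while a lower-page program is running. */
static void read_during_program(uint32_t read_addr)
{
    core_suspend_program();      /* program state stays in its own data latch   */
    core_sense_page(read_addr);  /* short sensing phase into a free latch (DL0) */
    core_resume_program();       /* programming carries on where it left off    */
    io_shift_out(dl0, 8);        /* data out overlaps the resumed programming   */
}

int main(void)
{
    read_during_program(0xAB);
    return 0;
}
```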

如早先所描述,在四態(2位元)記憶體之實例中,對於該頁之每一記憶體單元而言較佳鎖存器數目為三。僅需要用以儲存下部頁程式化資料之一鎖存器用於下部頁程式化。此留下兩個空閒鎖存器。在通常之讀取作業中僅需一個空閒鎖存器來鎖存經感應之資料位元。在較佳先行("LA")讀取作業中,需要兩個空閒鎖存器。將在稍後章節中對此進行更詳細描述。As described earlier, in the four-state (2-bit) memory example, the number of preferred latches is three for each memory cell of the page. Only one of the latches for storing the lower page stylized data is needed for the lower page stylization. This leaves two free latches. Only one idle latch is required to latch the sensed data bit in a typical read job. In a preferred look-ahead ("LA") read job, two free latches are required. This will be described in more detail in a later section.

圖26為上部頁程式化之示意時序圖,其說明使用可用鎖存器而進行之讀取中斷作業。同時展示主機、I/O匯流排、資料鎖存器及記憶體核心之行為。已結合圖23描述了多階段程式化,其導致在不同階段期間不同數目之空閒的資料鎖存器可用。舉例而言,在已對狀態"A"進行程式化驗證之後,一資料鎖存器空閒,且在已對狀態"A"及狀態"B"進行程式化驗證之後,兩個資料鎖存器空閒。Figure 26 is a schematic timing diagram of the upper page stylization illustrating the read interrupt operation using the available latches. It also shows the behavior of the host, I/O bus, data latch, and memory core. Multi-stage programming has been described in connection with Figure 23, which results in a different number of free data latches available during different phases. For example, after the stylized verification of state "A", a data latch is idle, and after the state "A" and state "B" have been programmatically verified, the two data latches are idle. .

因此,在對狀態"A"進行程式化驗證之後,單一空閒鎖存器可用以鎖存自習知讀取感應之資料。另一方面,若已對狀態"A"及狀態"B"進行程式化驗證,則兩個可用鎖存器將能夠支援上文所解釋之LA讀取。Therefore, after stylized verification of state "A", a single idle latch can be used to latch the self-learning read sensing data. On the other hand, if state "A" and state "B" have been programmatically verified, the two available latches will be able to support the LA read explained above.

對多個快取指令之管理Management of multiple cache instructions

需管理同時發生之記憶體作業以支援快取作業,其中在記憶體核心中執行一記憶體作業,同時於資料鎖存器處快取用於額外未決記憶體作業之資料或經由I/O匯流排而將該資料轉移。習知記憶體裝置通常不具有足夠數目之空閒資料鎖存器來執行快取作業。即使其具有足夠數目之空閒資料鎖存器,仍僅在完成當前記憶體作業之後執行未決記憶體作業(其資料經快取)。It is necessary to manage concurrent memory jobs to support cache operations, where a memory job is executed in the memory core while data for additional pending memory jobs is cached at the data latch or via I/O The data is transferred. Conventional memory devices typically do not have a sufficient number of free data latches to perform a cache operation. Even if it has a sufficient number of free data latches, the pending memory job (its data is cached) is only executed after the current memory job is completed.

圖27說明與典型記憶體作業相關聯之資訊之封裝。當請求記憶體作業時,其接收表示指定記憶體作業之開始的前指令。此後為記憶體陣列中作業發生之位址。在抹除作業之情形下,位址為待抹除的記憶體單元之區塊。在程式化或讀取作業之情形下,位址為待接受執行的記憶體單元之頁。若所指定之作業為程式作業,則將供應程式化資料以載入至資料鎖存器中。當程式化資料處於適當位置時,將發布執行指令以關於可用程式化資料而執行程式作業。若所指定之作業為讀取作業,則將不向記憶體發送資料。將發布執行指令以執行讀取作業。將感應經定址之記憶體單元之頁且將鎖存經感應之資料於資料鎖存器中以最後經由I/O匯流排切出。Figure 27 illustrates the package of information associated with a typical memory operation. When a memory operation is requested, a pre-command indicating the start of the specified memory operation is received. This is followed by the address in the memory array at which the operation is to take place. In the case of an erase operation, the address is the block of memory cells to be erased. In the case of a program or read operation, the address is the page of memory cells to be operated on. If the specified operation is a program operation, the program data is then supplied for loading into the data latches. When the program data is in place, an execute command is issued to carry out the program operation on the available program data. If the specified operation is a read operation, no data is sent to the memory; an execute command is issued to carry out the read operation. The addressed page of memory cells is sensed and the sensed data is latched in the data latches, to be eventually shifted out via the I/O bus.
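A minimal way to represent such a request in controller firmware is a small record like the one below; the field names and widths are illustrative assumptions, not a format defined by the patent.

```c
#include <stddef.h>
#include <stdint.h>

enum mem_op_kind { OP_READ, OP_PROGRAM, OP_ERASE };

/* The pieces of information that accompany one requested memory operation:
 * a pre-command naming the operation, the address it applies to, optional
 * program data for the latches, and finally an execute command. */
struct mem_request {
    enum mem_op_kind kind;      /* pre-command                                    */
    uint32_t         addr;      /* page address (block address for an erase)      */
    const uint8_t   *data;      /* program data to load into the latches, if any  */
    size_t           len;       /* length of the program data; 0 for read/erase   */
    /* an execute command is issued once the data (if any) is in the latches */
};
```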

圖28說明支援簡單快取作業之習知記憶體系統。記憶體系統包括記憶體控制器8,其控制記憶體晶片301。記憶體晶片具有由晶片上主機介面/控制電路310控制之記憶體陣列100。控制電路包括管理記憶體陣列之基本記憶體作業之狀態機。主機6經由執行諸如映射及維護的較高級記憶體功能之記憶體控制器8而嚙合記憶體系統。Figure 28 illustrates a conventional memory system that supports simple cache operations. The memory system includes a memory controller 8, which controls a memory chip 301. The memory chip has a memory array 100 controlled by an on-chip host interface/control circuit 310. The control circuit includes a state machine that manages the basic memory operations of the memory array. The host 6 interacts with the memory system via the memory controller 8, which performs higher-level memory functions such as mapping and maintenance.

狀態信號就緒/忙碌 允許主機或記憶體控制器在記憶體晶片不忙碌時請求記憶體作業。將所請求之記憶體作業保持於緩衝器322中且釋放至狀態機312以在狀態機不執行另一記憶體作業時執行。舉例而言,在記憶體陣列中由狀態機控制而執行記憶體作業MEM OP0。若存在可用之空閒資料鎖存器,則將向控制器發信號以允許將未決記憶體作業MEM OP1發送至記憶體晶片且於緩衝器322中經緩衝。同時,將與MEM OP1相關聯之資料切入記憶體晶片且鎖存至資料鎖存器中。MEM OP0一完成執行,狀態機即釋放緩衝器中之MEM OP1以開始其執行。因此,在習知記憶體系統中,在完成當前記憶體作業之後執行未決記憶體作業。Status Signal Ready/Busy * Allows the host or memory controller to request a memory job when the memory chip is not busy. The requested memory job is held in buffer 322 and released to state machine 312 for execution when the state machine is not performing another memory job. For example, the memory job MEM OP0 is executed by the state machine in the memory array. If there is a free data latch available, the controller will be signaled to allow the pending memory job MEM OP1 to be sent to the memory die and buffered in buffer 322. At the same time, the data associated with MEM OP1 is sliced into the memory chip and latched into the data latch. Once MEM OP0 completes execution, the state machine releases MEM OP1 in the buffer to begin its execution. Therefore, in the conventional memory system, a pending memory job is executed after the current memory job is completed.

在圖28所示之實例中,每一指令在其可開始執行之前必須等待直至最後一者完成,但其資料在最後一者之執行期間經快取。因此,在MEM OP0執行於記憶體核心中之同時,與MEM OP1相關聯之資料1正被鎖存。MEM OP1將在完成MEM OP0之後作用於經快取之資料1。類似地,在MEM OP1執行於記憶體核心中之同時,與MEM OP2相關聯之資料2正被鎖存。此機制阻礙載入同一字線之下部及上部邏輯頁及有效地在同一程式化作業中程式化多個位元之可能性。In the example shown in Figure 28, each instruction must wait until the last one completes before it can begin execution, but its data is cached during the execution of the last one. Therefore, while MEM OP0 is being executed in the memory core, the material 1 associated with MEM OP1 is being latched. MEM OP1 will act on the cached data 1 after completion of MEM OP0. Similarly, while MEM OP1 is executing in the memory core, the material 2 associated with MEM OP2 is being latched. This mechanism hinders the possibility of loading the lower and upper logical pages of the same word line and effectively staging multiple bits in the same stylized job.

存在影響程式作業(尤其對於連續程式化)之效能之兩個因素。第一者係關於載入程式化資料之時間。隨著快閃記憶體容量變得較大,其頁大小亦隨每一新的世代而增加。待受程式化之較大頁資料因此佔用較長時間來載入資料鎖存器。為了增大程式化效能,需要將資料載入時間藏於別處。此藉由在記憶體核心於前景中忙於一程式作業但使其資料鎖存器及I/O匯流排閒置之同時在背景中快取盡可能多之程式化資料而完成。There are two factors that affect the performance of a program (especially for continuous stylization). The first is about the time to load stylized data. As flash memory capacity becomes larger, its page size increases with each new generation. The larger page data to be stylized therefore takes a long time to load the data latch. In order to increase stylized performance, you need to hide the data loading time elsewhere. This is accomplished by fetching as much stylized data as possible in the background while the memory core is busy with a program in the foreground but has its data latches and I/O buss idle.

本發明之一特徵為藉由在程式化期間於背景中將較多頁載入資料鎖存器以使得資料鎖存器一可用即被用於快取未決程式化資料而處理第一因素。此包括允許在同一前景作業期間於背景中快取與一個以上指令相關聯之資料。One feature of the present invention is the processing of the first factor by loading more pages into the data latches in the background during stylization such that the data latches are available for caching pending programmatic data as soon as they are available. This includes allowing data associated with more than one instruction to be cached in the background during the same foreground job.

關於程式化效能之第二因素係關於程式化一頁(尤其關於程式化具有同一字線之多位元單元之頁)之時間。如之前所述,可將多位元單元之頁作為個別單位元頁之集合而處理。舉例而言,可將2位元頁作為兩個稍微獨立之單位元頁(即下部位元頁及上部位元頁)而對其進行程式化及讀取。詳言之,下部位元頁之程式化資料一可用即可對下部位元頁進行程式化。在第二次通過中將上部位元頁程式化至記憶體單元之同一頁且該程式化視已程式化於單元中之下部頁之值而定。以此方式,可在兩個不同時間於兩個單獨之通過中對兩個位元進行程式化。然而,較為有效且較為準確之方式(具有較少程式化干擾)為在稱作"所有位元"或"全序列"之程式化中在單一通過中程式化兩個位元。此僅在所有資料位元在程式化期間可用之情況下為可能的。因此,在實務上,若所有位元可用,則較佳地執行所有位元程式化。另一方面,若僅下部頁資料可用,則將首先對下部頁進行程式化。稍後若屬於同一字線之上部頁資料變得可用,則將在第二次通過中對該頁之單元進行程式化。或者,若上部頁資料在下部頁程式化完成之前變得可用,則將需要停止下部頁程式化且替代地轉為執行所有位元程式化。The second factor about stylized performance is the time of a stylized page (especially about pages that are stylized with multiple bit cells of the same word line). As described earlier, pages of multi-bit cells can be processed as a collection of individual cell pages. For example, a 2-bit page can be programmed and read as two slightly separate unit page (ie, lower part meta page and upper part meta page). In detail, the stylized data of the lower part of the meta page can be used to program the lower part of the meta page. In the second pass, the upper part meta page is programmed to the same page of the memory unit and the stylized view is programmed to the value of the lower page of the cell. In this way, two bits can be programmed in two separate passes at two different times. However, a more efficient and accurate way (with less stylized interference) is to program two bits in a single pass in a stylization called "all bits" or "full sequence." This is only possible if all data bits are available during stylization. Therefore, in practice, if all the bits are available, then all bit stylization is preferably performed. On the other hand, if only the lower page material is available, the lower page will be first programmed. If the data belonging to the top of the same word line becomes available later, the unit of the page will be stylized in the second pass. Alternatively, if the upper page material becomes available before the lower page is stylized, it will be necessary to stop the lower page stylization and instead switch to performing all bit stylization.

圖28所示之機制將不支援在背景中將一個以上指令排入佇列且因此不支援快取一個以上頁之資料。此外,其無法處理以下情形:下部頁程式化過早終止且在所有位元變得可用時替代地轉為執行不同的"所有位元"程式化。The mechanism shown in Figure 28 will not support listing more than one instruction in the background and therefore does not support fetching more than one page of data. In addition, it cannot handle situations where the lower page stylizes prematurely terminates and instead converts to a different "all bits" stylization as all bits become available.

本發明之另一特徵為藉由允許快取對於所有位元程式化為必要之所有位元以使得所有位元程式化可發生而處理第二因素。此外,指令佇列管理器管理多個未決指令且允許特定指令(視其相關聯之資料之狀態而定)在完成之前終止以有利於下一未決指令。Another feature of the present invention is to handle the second factor by allowing the cache to be all bits necessary for all bits to be stylized such that all of the bits are stylized. In addition, the instruction queue manager manages multiple pending instructions and allows specific instructions (depending on the state of their associated data) to terminate prior to completion to facilitate the next pending instruction.

本發明之兩個特徵合作以藉由快取較多程式化資料及允許使用較為有效之程式化演算法而增強程式化效能。The two features of the present invention work together to enhance stylized performance by caching more stylized data and allowing the use of more efficient stylized algorithms.

根據本發明之一態樣,可在將其他多個未決記憶體作業排入佇列之同時執行當前記憶體作業。此外,當滿足特定條件時,此等指令中用於個別作業之一些可合併至組合作業中。在一情形中,當滿足條件以將佇列中之多個未決記憶體作業中之一或多者與在執行中之當前記憶體作業合併時,當前記憶體作業終止且由對合併所得之作業的作業而替代。在另一情形中,當滿足條件以合併佇列中之多個未決記憶體作業中之兩者或兩者以上時,對合併所得之作業的作業將在處於執行中之當前作業完成後開始。According to one aspect of the present invention, the current memory job can be executed while the other plurality of pending memory jobs are queued. In addition, some of the individual jobs used in these instructions can be combined into a combined job when certain conditions are met. In one case, when a condition is met to merge one or more of the plurality of pending memory jobs in the queue with the current memory job in execution, the current memory job is terminated and the resulting job is merged Instead of homework. In another scenario, when the condition is met to merge two or more of the plurality of pending memory jobs in the queue, the job for the merged job will begin after the current job in execution is completed.

一實例為在程式化共用一共同字線之記憶體單元之多位元頁中。可將多個位元中之每一者視作形成二進位邏輯頁之位元。以此方式,2位元記憶體單元之頁將具有下部邏輯頁及上部邏輯頁。3位元記憶體單元之頁將具有另外一中部邏輯頁。可分別對每一二進位邏輯頁進行程式化。因此,對於2位元記憶體單元而言,可在第一次通過中對下部邏輯頁進行程式化且在第二次通過中對上部邏輯頁進行程式化。或者且更為有效地,若關於2個位元之程式化資料可用,則較佳地在單一通過中對多位元頁進行程式化。An example is in a multi-bit page of a memory unit that is programmed to share a common word line. Each of the plurality of bits can be considered to be a bit forming a binary logical page. In this way, the page of the 2-bit memory unit will have the lower logical page and the upper logical page. The page of the 3-bit memory unit will have another central logical page. Each binary logical page can be programmed separately. Thus, for a 2-bit memory unit, the lower logical page can be programmed in the first pass and the upper logical page can be programmed in the second pass. Or, more effectively, if two-bit stylized data is available, it is preferable to program the multi-bit page in a single pass.

視程式化資料之多少位元可用而定,對於多個二進位程式化或經合併且單一通過之多位元程式化而言若干情況為可能的。理想地,若所有位元在程式化之前可用,則在單次通過中對記憶體單元之多位元頁進行程式化。如早先所描述,若僅下部邏輯頁程式化資料可用,則對下部邏輯頁之單位元程式化可開始。隨後,當上部邏輯頁程式化資料可用時,可在第二次通過中對記憶體單元之同一頁進行程式化。另一可能性為上部頁資料在下部頁程式化完成之前變得可用。在彼情形下,為了利用較為有效之單一通過多位元或"全序列"程式化,下部頁程式化終止且由多位元程式化所替代。其如同合併或組合對於下部邏輯頁與上部頁之程式化一般。Depending on how many bits of stylized data are available, several cases are possible for multiple binary stylized or merged and single-pass multi-bit stylization. Ideally, if all of the bits are available prior to stylization, the multi-bit pages of the memory unit are stylized in a single pass. As described earlier, if only the lower logical page stylized data is available, the stylization of the lower logical page can begin. Subsequently, when the upper logical page stylized data is available, the same page of the memory unit can be programmed in the second pass. Another possibility is that the upper page material becomes available before the lower page is stylized. In this case, in order to utilize a more efficient single-pass multi-bit or "full sequence" stylization, the lower page is stylized terminated and replaced by multi-bit stylization. It is like merging or combining for the stylization of the lower logical page and the upper page.

對於具有多位元單位之記憶體,由主機發送之邏輯程式化資料之頁可為下部、上部或一些其他中間邏輯頁之混合物。因此,一般需要快取資料鎖存器允許之盡可能多的程式化資料之頁。此將增大合併屬於記憶體單元之同一頁之邏輯頁以執行多位元程式化的可能性。For memory with multiple bit units, the page of logical stylized data sent by the host can be a mixture of lower, upper, or some other intermediate logical page. Therefore, it is generally necessary to cache as many pages of stylized data as possible allowed by the data latch. This will increase the likelihood of merging logical pages belonging to the same page of memory cells to perform multi-bit stylization.

圖29為說明多個記憶體作業之排入佇列及可能合併之流程圖。向具有核心陣列及用於鎖存與陣列之經定址之頁相關聯之資料的資料鎖存器之記憶體應用用於管理多個記憶體作業之演算法。Fig. 29 is a flow chart showing the arrangement of a plurality of memory jobs and possible merging. The algorithm for managing a plurality of memory jobs is applied to a memory having a core array and a data latch for latching data associated with the addressed pages of the array.

步驟710:提供一先進先出佇列以對待執行於核心陣列中之即將到來的記憶體作業進行排序。Step 710: Provide a first in first out queue to order the upcoming memory jobs to be executed in the core array.

步驟720:在無論何時資料鎖存器可用於快取即將到來之記憶體作業之資料時接受即將到來的記憶體作業進入佇列。Step 720: Accept the incoming memory job into the queue whenever the data latch is available to cache data for the upcoming memory job.

步驟730:判定正執行於核心陣列中之記憶體作業是否可潛在地與佇列中之記憶體作業中之任一者合併。若其潛在地可合併,則前進至步驟740,否則前進至步驟750。Step 730: Determine if the memory job being executed in the core array can potentially be merged with any of the memory jobs in the queue. If it is potentially merging, proceed to step 740, otherwise proceed to step 750.

(就"潛在可合併"而言,其意謂可在單一通過中對與記憶體單元之同一頁相關聯之至少兩個邏輯頁一同進行程式化。舉例而言,在具有2位元記憶體單元之記憶體中,分別用以程式化下部邏輯頁與程式化上部邏輯頁之兩個作業潛在地可合併。類似地,在具有3位元記憶體單元之記憶體中,用以程式化下部邏輯頁與中間頁之作業潛在地可合併。又,用於下部、中間及上部邏輯頁之程式作業潛在地可合併。返回至2位元單元之實例,若下部邏輯頁正於核心陣列中處於執行中,則其在下一程式化係程式化屬於記憶體單元之同一頁之上部邏輯頁的情況下與來自佇列未決之下一程式作業潛在地可合併。另一方面,若上部頁正於核心陣列中處於執行中,則其並非潛在可合併的,因為待程式化之下一未決頁將需要來自於記憶體單元之不同頁。類似考慮應用於記憶體作業為讀取作業之情況中)。(As far as "potentially merging" is concerned, it means that at least two logical pages associated with the same page of a memory unit can be stylized together in a single pass. For example, with 2-bit memory In the memory of the unit, the two jobs for programming the lower logical page and the stylized upper logical page are potentially merged. Similarly, in the memory with the 3-bit memory unit, the lower part is used to program the lower part. The work of logical pages and intermediate pages can potentially be merged. Also, the program operations for the lower, middle, and upper logical pages can potentially be merged. Return to the instance of the 2-bit cell if the lower logical page is in the core array. In execution, if the next stylized system is programmed to belong to the upper logical page of the same page of the memory unit, it may be potentially merged with the program from the pending list. On the other hand, if the upper page is in the middle The core array is in execution, so it is not potentially mergeable, because a pending page to be programmed will require different pages from the memory unit. Similar considerations apply to memory operations. In the case of the read operation).

步驟740:無論何時來自佇列之下一或多個記憶體作業與核心陣列中之記憶體作業可合併時,終止核心中對記憶體作業之執行且開始替代地執行經合併之記憶體作業;否則在執行來自佇列之下一記憶體作業之前等待直至核心中記憶體作業完成。前進至步驟720。Step 740: Whenever the next one or more memory operations from the queue can be merged with the memory operation in the core array, terminate execution of the memory operation in the core and instead begin executing the merged memory operation; otherwise, wait until the memory operation in the core has completed before executing the next memory operation from the queue. Proceed to step 720.

(就"可合併"而言,其意謂滿足可合併性之條件。在此情形下,下部及上部邏輯頁之程式化資料在其經鎖存於資料鎖存器中之後可用。類似地,"合併之記憶體作業"將對應於一同程式化或感應下部及上部邏輯頁)。(In the case of "mergeable", it means a condition that satisfies the mergeability. In this case, the stylized data of the lower and upper logical pages are available after they are latched in the data latch. Similarly, The "Merge Memory Job" will correspond to the stylized or sensed lower and upper logical pages).

步驟750:等待直至核心中之記憶體作業完成;及無論何時來自佇列之下兩個或兩個以上記憶體作業可合併時,在核心陣列中執行經合併之記憶體作業;否則在核心陣列中執行來自佇列之下一記憶體作業。前進至步驟720。Step 750: Wait until the memory job in the core is completed; and perform the merged memory job in the core array whenever two or more memory jobs from the queue can be merged; otherwise, in the core array The execution is performed from a memory job below the queue. Proceed to step 720.
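The queue-and-merge flow of steps 710-750 can be sketched, for the 2-bit case, as a tiny scheduler. Everything below (structures, queue depth, helper names) is invented for the illustration; it only mirrors the decision logic described above, not the patent's state-machine implementation.

```c
#include <stdbool.h>
#include <stdio.h>

enum kind { PAGE_LOWER, PAGE_UPPER, FULL_SEQUENCE, NONE };

struct op { enum kind kind; unsigned wordline; };

#define QLEN 4                             /* bounded by the free data latches */
static struct op fifo[QLEN];
static int q_head = 0, q_count = 0;

static bool q_push(struct op o)
{
    if (q_count == QLEN)
        return false;                      /* no free latch: request refused   */
    fifo[(q_head + q_count++) % QLEN] = o;
    return true;
}

static struct op q_pop(void)
{
    struct op o = fifo[q_head];
    q_head = (q_head + 1) % QLEN;
    q_count--;
    return o;
}

/* A lower-page program and an upper-page program on the same word line can be
 * combined into one single-pass, full-sequence program. */
static bool mergeable(struct op lower, struct op upper)
{
    return lower.kind == PAGE_LOWER && upper.kind == PAGE_UPPER &&
           lower.wordline == upper.wordline;
}

/* Decide what the core should run next; `current` is the operation executing
 * in the core (kind == NONE when the core is idle). */
static struct op schedule(struct op current)
{
    if (current.kind != NONE && q_count > 0 && mergeable(current, fifo[q_head])) {
        q_pop();                                       /* step 740: terminate + merge */
        return (struct op){ FULL_SEQUENCE, current.wordline };
    }
    if (current.kind != NONE)
        return current;                                /* keep executing              */
    if (q_count >= 2 && mergeable(fifo[q_head], fifo[(q_head + 1) % QLEN])) {
        struct op first = q_pop();                     /* step 750: merge two queued  */
        q_pop();
        return (struct op){ FULL_SEQUENCE, first.wordline };
    }
    return q_count ? q_pop() : (struct op){ NONE, 0 };
}

int main(void)
{
    q_push((struct op){ PAGE_UPPER, 7 });                     /* upper page of WL7 queued  */
    struct op core = schedule((struct op){ PAGE_LOWER, 7 });  /* lower page of WL7 running */
    printf("core now runs kind=%d on word line %u\n", core.kind, core.wordline);
    return 0;
}
```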

藉由提供由記憶體作業佇列管理器控制之記憶體作業佇列而完成對多個指令之管理。較佳地將記憶體作業佇列管理器實施為狀態機中控制記憶體陣列中之記憶體作業之執行的模組。Management of multiple instructions is accomplished by providing a memory job queue controlled by a memory job queue manager. Preferably, the memory job queue manager is implemented as a module in the state machine that controls the execution of memory operations in the memory array.

圖30說明併有記憶體作業佇列及記憶體作業佇列管理器之較佳晶片上控制電路之示意方塊圖。晶片上控制電路310'包括用來控制記憶體陣列100(亦見圖28)之基本作業之有限狀態機312'。藉由先進先出堆疊記憶體而實施記憶體作業佇列330以保持任何進入之記憶體作業請求。通常,自主機或記憶體控制器(見圖28)發布記憶體作業請求。Figure 30 illustrates a schematic block diagram of a preferred on-wafer control circuit with a memory operating array and a memory operating array manager. The on-wafer control circuit 310' includes a finite state machine 312' for controlling the basic operation of the memory array 100 (see also FIG. 28). The memory job queue 330 is implemented by FIFO stack memory to maintain any incoming memory job requests. Typically, a memory job request is issued from the host or memory controller (see Figure 28).

將記憶體作業佇列管理器332實施為狀態機312'中之一模組以管理複數個未決及執行之記憶體作業。佇列管理器332基本上排程佇列330中待釋放至狀態機312'中以執行之未決記憶體作業。The memory job queue manager 332 is implemented as one of the modules in the state machine 312' to manage a plurality of pending and executed memory jobs. The queue manager 332 basically schedules pending memory jobs in the queue 330 to be released into the state machine 312' for execution.

當將諸如MEM OP0之記憶體作業自佇列釋放至狀態機之程式暫存器324中時,將在記憶體陣列上由狀態機控制而執行MEM OP0。在任何時候,狀態機均知曉可用的空閒資料鎖存器之數目且此狀態經由信號就緒/忙碌 而傳達至主機/記憶體控制器。若一或多個空閒之資料鎖存器可用,則主機將能夠請求諸如程式化或讀取之額外記憶體作業。因此容許由主機發送之MEM OP1、MEM OP2等等進入佇列330。將由可用之空閒資料記憶體之數目而判定佇列中記憶體作業之最大數目。When a memory job such as MEM OP0 is released from the bank to the program register 324 of the state machine, MEM OP0 is executed by the state machine on the memory array. At any time, the state machine knows the number of free data latches available and this state is communicated to the host/memory controller via signal ready/busy * . If one or more free data latches are available, the host will be able to request additional memory jobs such as stylization or reading. Therefore, the MEM OP1, MEM OP2, and the like transmitted by the host are allowed to enter the queue 330. The maximum number of memory jobs in the queue will be determined by the number of available free data memories.

當記憶體作業在佇列330中處於未決狀態時,佇列管理器332將控制未決記憶體作業自佇列330向狀態機中之程式暫存器324的釋放。此外,其判定是否記憶體作業中之任一者可合併至如結合圖29而描述之組合作業中。在佇列中之兩個或兩個以上之作業可合併之情形下,佇列管理器332將自佇列330釋放此等可合併作業且將在狀態機中之當前作業完成執行之後由狀態機312'執行組合之作業。在佇列中之一或多個作業可與正由狀態機執行之作業合併之情形下,佇列管理器將使得狀態機終止當前執行之作業且替代地執行組合之作業。因此,記憶體作業管理器332與狀態機312'之剩餘部分合作以排程且(可能地)合併多個記憶體作業。When the memory job is pending in the queue 330, the queue manager 332 will control the release of the pending memory job from the queue 330 to the program register 324 in the state machine. In addition, it is determined whether any of the memory operations can be incorporated into the combined operation as described in connection with FIG. In the event that two or more jobs in the queue can be merged, the queue manager 332 will release the merged jobs from the queue 330 and will be executed by the state machine after the current job in the state machine is completed. 312' performs a combined operation. In the event that one or more jobs in the queue can be merged with the job being executed by the state machine, the queue manager will cause the state machine to terminate the currently executing job and instead perform the combined job. Thus, the memory job manager 332 cooperates with the remainder of the state machine 312' to schedule and (possibly) merge multiple memory jobs.

已將本發明描述為使用具有2位元記憶體之實例。只要在當前記憶體作業期間使資料鎖存器自由,即可使用其以快取更多資料用於任何未決記憶體作業。此將允許將更多位元之資料載入可用資料鎖存器中以及增加合併記憶體作業之可能性。熟習此項技術者將易於能夠對具有可各儲存兩個以上位元之資料之單元的記憶體(例如,3位元或4位元記憶體)應用相同原理。舉例而言,在3位元記憶體中,可將記憶體之頁視作具有三個個別位元頁,即下部、中部及上部位元頁。可在記憶體單元之同一頁上於不同時間個別地對此等頁進行程式化。或者,所有三個位元在可用時可以所有位元程式化模式而一同經程式化。此要求將快取程式化指令排入佇列用於許多頁。在2位元記憶體中,可在全序列轉換為可能時一同執行兩個程式化指令。類似地,在3位元記憶體中,三個連續程式化指令可在轉換為所有位元或全序列模式時一同經執行。又,指令佇列管理器將追蹤哪一指令已完成或終止且哪一者為待執行之下一者。以此方式,在程式化期間到達特定記憶體狀態里程碑時,一些資料鎖存器得以自由且可有效地用於快取未決程式化資料。The invention has been described as using an example with 2-bit memory. As long as the data latch is free during the current memory job, it can be used to cache more data for any pending memory jobs. This will allow more bits of data to be loaded into the available data latches and increase the likelihood of merging memory jobs. Those skilled in the art will readily be able to apply the same principles to memory (e.g., 3-bit or 4-bit memory) having cells that can store more than two bits of data. For example, in 3-bit memory, the page of memory can be viewed as having three individual bit pages, ie, lower, middle, and upper part meta pages. These pages can be individually programmed at different times on the same page of the memory unit. Alternatively, all three bits can be stylized together with all bit stylized modes when available. This requirement queues cached instructions into a queue for many pages. In 2-bit memory, two stylized instructions can be executed together when the full sequence conversion is possible. Similarly, in 3-bit memory, three consecutive stylized instructions can be executed together when converted to all or full sequence mode. Again, the command queue manager will track which instruction has completed or terminated and which is the one to be executed. In this way, some data latches are freely and effectively used to cache pending stylized data when a particular memory state milestone is reached during stylization.

抹除期間之快取作業-背景讀取及寫入作業Cache job during erase - background read and write jobs

抹除作業之潛時為快閃儲存系統之整體效能負荷之主要組成部分中之一者。舉例而言,抹除作業之週期可能比程式作業之週期長四或五倍且比讀取作業之週期長十倍。為了改良快閃記憶體之效能,諸如快取作業之背景作業變得非常重要以利用等待抹除作業結束之時間。本發明將在記憶體由記憶體核心中之抹除作業佔用而忙碌時利用資料鎖存器及I/O匯流排。舉例而言,可與抹除作業同時執行用於下一程式作業之資料或自讀取作業輸出之資料。以此方式,當下一程式化或讀取作業確實發生時,彼作業之資料輸入或輸出部分已完成,藉此減少程式化或讀取潛時且增加效能。The latency of the erase operation is one of the major components of the overall performance load of the flash storage system. For example, the erase job cycle may be four or five times longer than the program job cycle and ten times longer than the read job cycle. In order to improve the performance of the flash memory, background work such as a cache job becomes very important to take advantage of the time to wait for the end of the erase job. The present invention utilizes data latches and I/O busses when the memory is busy by erase operations in the memory core. For example, the data for the next program job or the data output from the read job can be executed simultaneously with the erase job. In this way, when the next stylized or read job does occur, the data input or output portion of the job is completed, thereby reducing stylization or read latency and increasing performance.

可以許多方式而實施抹除作業。美國專利第5,172,338號中揭示之一方法藉由交替抹除脈衝發出繼之以驗證而抹除。一旦對單元進行了抹除驗證,即抑制其不受進一步抹除脈衝發出之影響。另一抹除作業(較佳地用於NAND記憶體)包括兩個階段。在第一階段中,存在藉由將電荷自記憶體單元之電荷元件移除至預定"抹除"或"接地"狀態以下之某一臨限位準而進行的抹除。在第二階段中,藉由一系列關於預定"抹除"臨限之軟式程式化/驗證而將經抹除之單元之臨限值收緊為處於精細界定之臨限分布內。Erasing operations can be performed in a number of ways. One of the methods disclosed in U.S. Patent No. 5,172,338 is erased by alternately erasing a pulse followed by verification. Once the unit has been erased, it is suppressed from the effects of further erase pulses. Another erase operation (preferably for NAND memory) involves two phases. In the first stage, there is an erase by removing the charge from the charge element of the memory cell to a certain threshold level below a predetermined "erase" or "ground" state. In the second phase, the threshold of the erased unit is tightened into a finely defined threshold distribution by a series of soft stylization/verification regarding the predetermined "erase" threshold.
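As a rough picture of the two-phase scheme just described, the loop below first drives a small block of model cells below an erase-verify level and then soft-programs the deeply erased ones back toward a tighter band. All levels, step sizes and the cell model are invented for the illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define CELLS        8
#define ERASE_VERIFY (-1.0)   /* phase 1: every cell must fall below this level  */
#define SOFT_VERIFY  (-1.4)   /* phase 2: raise deeply erased cells back to here */

int main(void)
{
    double vt[CELLS] = { 2.1, 1.3, 0.4, 1.8, 0.9, 2.4, 1.1, 0.2 };

    /* Phase 1: erase pulses remove charge until the whole block verifies erased. */
    bool all_erased = false;
    while (!all_erased) {
        all_erased = true;
        for (int i = 0; i < CELLS; i++)
            if (vt[i] > ERASE_VERIFY) {
                vt[i] -= 2.5;              /* one (large) erase pulse              */
                all_erased = false;
            }
    }

    /* Phase 2: soft-programming/verify tightens the erased distribution. */
    for (int i = 0; i < CELLS; i++)
        while (vt[i] < SOFT_VERIFY)
            vt[i] += 0.2;                  /* small soft-programming step + verify */

    for (int i = 0; i < CELLS; i++)
        printf("cell %d: vt = %.2f\n", i, vt[i]);
    return 0;
}
```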

根據本發明之一般態樣,在抹除作業發生之同時,任何空閒之資料鎖存器均用以快取與另一未決記憶體作業相關之資料。In accordance with a general aspect of the present invention, any idle data latch is used to cache data associated with another pending memory job while the erase operation occurs.

圖31為說明抹除作業期間在背景中之快取作業的示意流程圖。Figure 31 is a schematic flow chart illustrating the cache operation in the background during the erase operation.

步驟760:向經定址之頁之每一記憶體單元提供一具有鎖存預定數目之位元之能力的資料鎖存器集合。Step 760: Provide each memory cell of the addressed page with a set of data latches having the ability to latch a predetermined number of bits.

步驟770:對指定組之頁執行抹除作業。Step 770: Perform an erase job on the page of the specified group.

步驟780:與抹除作業同時發生,以與對於記憶體陣列進行之一或多個後續記憶體作業相關的資料對資料鎖存器之集合執行作業。Step 780: Concurrently with the erase operation, operate on the set of data latches with data related to one or more subsequent memory operations on the memory array.

根據本發明之一態樣,在抹除作業發生之同時,經由I/O匯流排而將用於未決程式作業之程式化資料載入資料鎖存器中。詳言之,在抹除作業之第一階段期間移除電荷時,所有資料鎖存器均可用於快取程式化資料。在抹除作業之第二階段期間軟式程式化發生時,除一資料鎖存器之外的所有資料鎖存器可用於快取程式化資料,因為需要資料鎖存器中之一者來儲存成功驗證軟式程式化之後彼位置處之程式化封鎖狀況。若記憶體架構支援每單元2個位元,則存在至少2個資料鎖存器,每一位元一個。在較佳實施例中,使用額外資料鎖存器以儲存在作業期間出現之特定狀況。因此,視記憶體架構而定,對於2位元單元存在向每一單元提供之至少兩個且較佳地三個資料鎖存器。所有此等資料鎖存器可在抹除之第一階段期間用於快取用途,且除一者之外的所有此等資料鎖存器可在抹除作業之第二階段期間用於快取用途。因此可視抹除階段及記憶體架構而將一或多頁程式化資料載入可用資料鎖存器中。According to one aspect of the present invention, stylized data for pending program jobs is loaded into the data latch via the I/O bus at the same time as the erase operation occurs. In particular, all data latches can be used to cache stylized data when the charge is removed during the first phase of the erase operation. When a soft stylization occurs during the second phase of the erase operation, all data latches except one data latch can be used to cache the stylized data because one of the data latches is required to store successfully. Verify the stylized blockade at the location after the soft stylization. If the memory architecture supports 2 bits per cell, there are at least 2 data latches, one for each bit. In the preferred embodiment, additional data latches are used to store the particular conditions that occur during the job. Thus, depending on the memory architecture, there are at least two and preferably three data latches provided to each cell for a 2-bit cell. All of these data latches can be used for cache access during the first phase of erasing, and all but one of these data latches can be used for cache during the second phase of the erase operation use. Therefore, one or more pages of stylized data are loaded into the available data latches by the erase phase and the memory architecture.
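The latch budget during the two erase phases can be stated compactly: all latches are usable for caching while charge is being removed, and all but one during soft-programming/verify. The sketch below simply encodes that rule for the three-latch configuration discussed here; it is illustrative only.

```c
#include <stdio.h>

#define LATCHES_PER_CELL 3        /* DL0..DL2 in the 2-bit design discussed above */

enum erase_phase { ERASE_PULSING, SOFT_PROGRAM_VERIFY, ERASE_IDLE };

static int latches_free_for_cache(enum erase_phase p)
{
    switch (p) {
    case ERASE_PULSING:       return LATCHES_PER_CELL;      /* phase 1: all free         */
    case SOFT_PROGRAM_VERIFY: return LATCHES_PER_CELL - 1;  /* phase 2: one holds lockout */
    default:                  return LATCHES_PER_CELL;
    }
}

int main(void)
{
    printf("pages cacheable in phase 1: %d\n", latches_free_for_cache(ERASE_PULSING));
    printf("pages cacheable in phase 2: %d\n", latches_free_for_cache(SOFT_PROGRAM_VERIFY));
    return 0;
}
```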

圖32為對記憶體陣列進行之抹除作業之示意時序圖,其說明抹除作業之第一抹除階段期間之程式化資料載入作業。同時展示主機、I/O匯流排、資料鎖存器及記憶體核心之行為。如圖中所示,記憶體核心處之抹除作業包括第一抹除階段,隨後為第二軟式程式化/驗證階段。Figure 32 is a schematic timing diagram of an erase operation performed on a memory array illustrating a stylized data loading operation during the first erase phase of the erase operation. It also shows the behavior of the host, I/O bus, data latch, and memory core. As shown in the figure, the erase operation at the core of the memory includes a first erase phase followed by a second soft stylization/verification phase.

在抹除作業之第一階段期間,記憶體陣列或核心經先占,但資料鎖存器及I/O匯流排為空閒以用於背景作業。在此時間期間,可經由I/O匯流排而將程式化資料載入資料鎖存器中。舉例而言,在對於每一單元存在三個資料鎖存器之較佳實施例中,所有此等鎖存器在第一抹除階段期間均可用於快取作業。During the first phase of the erase operation, the memory array or core is preempted, but the data latches and I/O buss are free for background operations. During this time, the stylized data can be loaded into the data latch via the I/O bus. For example, in a preferred embodiment where there are three data latches for each cell, all of these latches can be used for the cache job during the first erase phase.

舉例而言,在第N頁資料待寫入時,主機最初向記憶體發布寫入指令以將該頁資料寫入至指定位址。此後為將待經程式化的該頁資料發送至記憶體。經由I/O匯流排將程式化資料切入且將其鎖存至每一讀取/寫入模組之DL2中(見圖13及圖14)。因此,I/O匯流排在此切入週期(例如可具有300 μs之持續時間)期間暫時忙碌。在三個資料鎖存器可用之情況下,原則上可快取高達三頁之程式化資料。舉例而言,在抹除作業進行之同時可載入第N頁之下部頁部分,或者可順序地載入第N頁之下部及上部頁部分。For example, when the Nth page of data is to be written, the host initially issues a write command to the memory to write the page data to the specified address. Thereafter, the page material to be stylized is sent to the memory. The stylized data is cut in via the I/O bus and latched into the DL2 of each read/write module (see Figures 13 and 14). Therefore, the I/O bus is temporarily busy during this hand-in cycle (eg, can have a duration of 300 μs). In the case where three data latches are available, in principle up to three pages of stylized data can be cached. For example, the erased job can be loaded while the lower page portion of the Nth page is loaded, or the lower and upper page portions of the Nth page can be sequentially loaded.

圖33為對記憶體陣列進行之抹除作業之示意時序圖,其說明抹除作業之軟式程式化/驗證階段期間之程式化資料載入作業。同時展示主機、I/O匯流排、資料鎖存器及記憶體核心之行為。Figure 33 is a schematic timing diagram of an erase operation performed on a memory array illustrating a stylized data loading operation during the soft stylization/verification phase of the erase operation. It also shows the behavior of the host, I/O bus, data latch, and memory core.

在抹除作業之第二軟式程式化/驗證階段期間,記憶體陣列或核心亦經先占。然而,如上文所述,除一資料鎖存器以外之所有資料鎖存器及I/O匯流排為空閒的。可將程式化資料載入未由抹除作業使用之資料鎖存器中。舉例而言,在對於每一單元存在三個資料鎖存器之較佳實施例中,軟式程式化/驗證作業僅使用鎖存器中之一者。因此仍存在兩個空閒之鎖存器可用於快取作業。During the second soft stylization/verification phase of the erase operation, the memory array or core is also preempted. However, as described above, all data latches and I/O buss except one data latch are free. Stylized data can be loaded into the data latches that are not used by the erase job. For example, in a preferred embodiment where there are three data latches per cell, the soft stylization/verification job uses only one of the latches. Therefore, there are still two free latches available for the cache job.

舉例而言,在第N頁資料待寫入時,主機最初向記憶體發布寫入指令以將該頁資料寫入至指定位址。此後為將待經程式化的該頁資料發送至記憶體。經由I/O匯流排將程式化資料切入且將其鎖存至每一讀取/寫入模組之DL2中(見圖13及圖14)。因此,I/O匯流排在此切入週期(例如可具有300 μs之持續時間)期間暫時忙碌。在兩個資料鎖存器可用之情況下,原則上可快取高達兩頁之程式化資料。舉例而言,在抹除作業進行之同時可載入第N頁之下部頁部分,或者可順序地載入第N頁之下部及上部頁部分。For example, when the Nth page of data is to be written, the host initially issues a write command to the memory to write the page data to the specified address. Thereafter, the page material to be stylized is sent to the memory. The stylized data is cut in via the I/O bus and latched into the DL2 of each read/write module (see Figures 13 and 14). Therefore, the I/O bus is temporarily busy during this hand-in cycle (eg, can have a duration of 300 μs). In the case where two data latches are available, in principle up to two pages of stylized data can be cached. For example, the erased job can be loaded while the lower page portion of the Nth page is loaded, or the lower and upper page portions of the Nth page can be sequentially loaded.

一般而言,可載入資料鎖存器中之頁之最大數目為記憶體架構以及並行程式化多少平面/組及多少晶片/晶粒及資料傳送率之速度的函數。In general, the maximum number of pages that can be loaded into the data latch is a function of the memory architecture and how many planes/groups and how many wafers/die and data transfer rates are programmed in parallel.

根據本發明之另一態樣,在抹除作業發生時,可插入讀取作業且可在抹除作業期間輸出資料鎖存器中之所得讀取資料。較佳地,在不中斷軟式程式化脈衝自身之情形下將讀取作業插入於軟式程式化/驗證作業之間。一旦將資料感應且鎖存至未使用之資料鎖存器中,即可在抹除於陣列內部進行時經由I/O匯流排而將資料輸出至主機系統。此特徵對於隱藏系統附加項以(例如)執行讀取擦洗作業及其他系統維護而言為理想的。According to another aspect of the present invention, while an erase operation is taking place, a read operation can be inserted and the resulting read data in the data latches can be output during the erase operation. Preferably, the read operation is inserted between soft-programming/verify operations without interrupting the soft-programming pulses themselves. Once the data has been sensed and latched into unused data latches, it can be output to the host system via the I/O bus while the erase proceeds inside the array. This feature is ideal for hiding system overheads, for example to perform read scrub operations and other system maintenance.

在先前技術之系統中,當抹除作業被中斷時,其將需要自循環開始處重新開始。此可為非常耗時的(特別在NAND記憶體中)。In prior art systems, when the erase job was interrupted, it would need to restart from the beginning of the cycle. This can be very time consuming (especially in NAND memory).

可將讀取作業插入於軟式程式化與抹除驗證脈衝之間。可將與軟式程式化脈衝之數目一樣多之讀取插入抹除作業中。感應時間為額外時間,但與整體軟式程式化/驗證作業相比具有較短持續時間。益處在處於與正在進行中之程式化/驗證作業並行發生之狀態中的切出讀取資料中獲得。讀取作業亦可用以在管理內部控制及資料管理時執行背景作業。A read operation can be inserted between soft-programming and erase-verify pulses. As many reads as there are soft-programming pulses can be inserted into the erase operation. The sensing time is extra time, but it is of short duration compared with the overall soft-programming/verify operation. The benefit comes from shifting out the read data in parallel with the ongoing soft-programming/verify operation. Read operations can also be used to perform background tasks for internal control and data management.

讀取在快閃儲存系統中於抹除期間之一有用應用在於實施讀取擦洗作業以將所儲存之資料保持於良好狀況。週期性地讀取記憶體之儲存資料之部分以檢查單元中之程式化電荷是否隨時間而改變或在其環境中改變。若為如此,則藉由以適當裕度再程式化單元而對其進行校正。美國專利第7,012,835號中已揭示讀取擦洗之各種機制,該專利之全部揭示內容以引用的方式併入本文中。由於讀取擦洗為主機之作業之外部的系統作業,因此將讀取擦洗藏於一些其他作業之後為最佳的,其中記憶體無論如何均將為忙碌的。在此情形下,在抹除作業期間,可插入讀取擦洗作業以使得可隱藏讀取潛時。One useful application of reading during erase in a flash storage system is to implement read scrub operations, which keep the stored data in good condition. Portions of the stored data in the memory are read periodically to check whether the programmed charge in the cells has shifted over time or with changes in their environment. If so, it is corrected by reprogramming the cells with the proper margin. Various mechanisms for read scrub have been disclosed in U.S. Patent No. 7,012,835, the entire disclosure of which is incorporated herein by reference. Since read scrub is a system operation extraneous to the host's own operations, it is best to hide a read scrub behind some other operation during which the memory would be busy anyway. In this case, a read scrub operation can be inserted during an erase operation so that the read latency can be hidden.

圖34為對記憶體陣列進行之抹除作業之示意時序圖，其說明插入之讀取作業及使用可用鎖存器而進行之所得資料輸出作業。同時展示主機、I/O匯流排、資料鎖存器及記憶體核心之行為。如圖中所示，在抹除作業之第二階段中，作業為軟式程式化/驗證。較佳地在不中斷任何軟式程式化脈衝之完成的情形下插入一或多個讀取作業。Figure 34 is a schematic timing diagram of an erase operation performed on a memory array, illustrating the inserted read operation and the resulting data output operation using the available latches. It also shows the behavior of the host, I/O bus, data latch, and memory core. As shown in the figure, in the second phase of the erase operation, the job is soft stylized/verified. Preferably, one or more read jobs are inserted without interrupting the completion of any soft stylized pulses.

在晶片處於抹除作業之第二階段中時，用於軟式程式化/驗證之演算法將執行。諸如忙碌/就緒*(未圖示)之狀態信號將以信號表明記憶體核心忙於內部抹除作業。同時，如快取忙碌/快取就緒*(未圖示)之另一狀態信號將自忙碌變為就緒以接受讀取指令輸入。讀取指令一進入，快取忙碌/快取就緒*即轉為忙碌以防止另一指令進入。讀取指令接著將等待直至當前軟式程式化脈衝在內部完成方可對同一晶片中之另一經定址的區塊執行。在讀取完成後，將位址變回先前執行之抹除區塊。軟式程式化/驗證作業可對於抹除區塊而恢復。The algorithm for soft stylization/verification will be executed while the chip is in the second phase of the erase operation. A status signal such as Busy/Ready* (not shown) will signal that the memory core is busy with the internal erase operation. At the same time, another status signal such as Cache Busy/Cache Ready* (not shown) will change from busy to ready to accept a read command input. As soon as a read command enters, Cache Busy/Cache Ready* turns busy again to prevent another command from entering. The read command will then wait until the current soft stylized pulse has completed internally before it is executed on another addressed block in the same chip. After the read is complete, the address is changed back to the erase block previously being operated on. The soft stylization/verification operation can then resume on the erase block.
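The sequencing just described can be summarised in a small control-flow sketch. It is illustrative only; the function and signal names are stand-ins for the internal state machine and the Busy/Ready* and Cache Busy/Cache Ready* signals, and only the ordering of events is the point: an inserted read waits for the current soft-programming pulse to finish, runs on another block, and then the erase-block address is restored.

```python
# Hedged sketch of an inserted read during the soft-program/verify phase.
def soft_program_with_inserted_reads(num_pulses, pending_reads, erase_block=7):
    log = []
    cache_ready = True                        # models Cache Busy/Cache Ready*
    for pulse in range(num_pulses):
        log.append(f"soft-program pulse {pulse} on block {erase_block}")  # never cut short
        if pending_reads and cache_ready:
            cache_ready = False               # lock out further commands
            block, page = pending_reads.pop(0)
            log.append(f"sense page {page} of block {block} into a spare latch")
            log.append(f"restore address to erase block {erase_block}")
            cache_ready = True                # latched data can now be toggled out
        log.append(f"erase-verify on block {erase_block}")
    return log

for step in soft_program_with_inserted_reads(3, [(12, 0), (12, 1)]):
    print(step)
```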

同時，可將資料鎖存器中之讀取資料切出。切出時間通常遠長於讀取時間。舉例而言，讀取時間為大約25 μs，而切出時間為大約200 μs。因此將讀取插入於抹除作業中之益處為自另外在等待抹除結束時浪費之時間搶救約200 μs。At the same time, the read data in the data latch can be toggled out. The toggle-out time is usually much longer than the read time. For example, the read time is approximately 25 μs and the toggle-out time is approximately 200 μs. Therefore, the benefit of inserting the read into the erase operation is to salvage about 200 μs from the time that would otherwise be wasted while waiting for the end of the erase.

可在抹除期間在抹除時間允許之情況下將此快取讀取插入盡可能多次。然而，過多讀取可延長總抹除時間且讀取可能招致的抹除作業之時間損失與自讀取搶救之切換時間之間的平衡將受到衝擊。若抹除期間在一或多個插入之讀取之後仍存在剩餘空閒時間，則可如早先章節中所述而使用可用資料鎖存器以快取任何程式化資料。若載入程式化資料，則程式作業僅可在整個抹除作業完成之後開始。必須保留足夠之空閒鎖存器用於對程式作業之適當執行，因此在多數情形下在載入程式化資料之後其他快取作業將為不可能的。This cache read can be inserted as many times as the erase time allows during the erase period. However, too many reads can extend the total erase time, so a balance must be struck between the erase-time penalty the reads may incur and the toggle-out time salvaged by them. If there is still idle time remaining during the erase after one or more inserted reads, the available data latches can be used to cache any stylized data as described in the earlier section. If the stylized data is loaded, the program job can only be started after the entire erase job is completed. Sufficient free latches must be reserved for proper execution of program jobs, so in most cases other cache jobs will not be possible after loading stylized data.

圖35為說明圖31之步驟780中在抹除作業期間在背景中用於讀取擦洗應用之特定快取作業的示意流程圖。35 is a schematic flow diagram illustrating a particular cache job for reading a scrubbing application in the background during an erase operation in step 780 of FIG.

將圖31所示之步驟780進一步清楚表示為如下:步驟782:暫停抹除作業以感應一指定頁。Step 780, shown in Figure 31, is further clearly indicated as follows: Step 782: Pause the erase job to sense a designated page.

步驟784:在將用於指定頁之資料鎖存於資料鎖存器之後恢復抹除作業。Step 784: Resume the erase operation after latching the data for the specified page into the data latch.

步驟786:在抹除作業期間輸出用於指定頁之資料。Step 786: Output data for specifying the page during the erase job.

步驟788：排程指定頁以在輸入資料含有誤差之情況下進行再程式化。Step 788: Schedule the designated page for reprogramming in the event that the input data contains errors.
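Steps 782 through 788 amount to a short scrub loop. The sketch below is an illustrative reconstruction only: the error check is shown as a caller-supplied test in place of a real ECC engine, and the suspend/resume of the erase is represented by comments rather than actual hardware hooks.

```python
# Illustrative read-scrub-during-erase sketch following steps 782-788 above.
def scrub_page(stored_bits, data_ok, schedule):
    # Step 782: pause the erase just long enough to sense the designated page.
    latched = list(stored_bits)
    # Step 784: resume the erase once the data sits in a spare data latch.
    # Step 786: toggle the latched data out over the I/O bus while erasing.
    output = list(latched)
    # Step 788: if the data fails its check, queue the page for reprogramming.
    if not data_ok(output):
        schedule.append("reprogram designated page")
    return output

parity_ok = lambda bits: sum(bits) % 2 == 1     # toy stand-in for an ECC check
schedule = []
scrub_page([1, 0, 1, 1], parity_ok, schedule)   # passes: nothing queued
scrub_page([1, 0, 0, 1], parity_ok, schedule)   # fails: refresh queued
print(schedule)                                  # ['reprogram designated page']
```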

至此為止對快取讀取之描述大部分係關於較佳抹除作業之第二階段而進行。較佳抹除作業為如下之抹除作業：第一階段為抹除所有單元至預定臨限以下之某一臨限位準且第二階段為將單元軟式程式化至預定臨限。如上文所述，此抹除機制較佳地用於具有NAND結構之快閃記憶體，因為其需要相當準確之基態且藉由對N型井加偏壓而抹除記憶體，此耗費時間。因此，較佳地在軟式程式化之前一同執行所有抹除。在使用抹除脈衝發出/驗證/抑制之機制的另一記憶體架構中，亦預期快取作業。舉例而言，可在循環之驗證部分期間插入讀取作業。The description of the cache read so far has mostly been carried out with respect to the second stage of the preferred erase operation. The preferred erase operation is one in which the first stage erases all cells to some threshold level below a predetermined threshold and the second stage soft-programs the cells to the predetermined threshold. As described above, this erase mechanism is preferably used for a flash memory having a NAND structure because it requires a fairly accurate ground state and erases the memory by biasing the N-well, which is time consuming. Therefore, it is preferable to perform all erasures together before the soft stylization. In another memory architecture that uses an erase-pulse issue/verify/inhibit mechanism, a cache operation is also contemplated. For example, a read job can be inserted during the verification portion of the loop.

圖36說明抹除期間之優先背景讀取。當讀取恰於抹除作業之前發生以使得無需中斷抹除作業時,此為更佳之快取讀取。此在於抹除作業開始之前已知讀取作業的情況下為可能的。舉例而言,主機可能具有一未決之讀取請求或者若記憶體系統具有經排程之某一讀取作業。或者,一智慧演算法可能預見下一讀取可能在何處且排程該讀取。即使稍後弄清楚其為失誤,亦將不招致嚴重損失。若其為一命中,則其可利用抹除時間以切出讀取資料。Figure 36 illustrates a prioritized background read during erase. This is a better cache read when the read occurs just before the erase job so that the erase job is not interrupted. This is possible in the case where the read job is known before the start of the erase job. For example, the host may have a pending read request or if the memory system has a scheduled read job. Alternatively, a smart algorithm may foresee where the next read may be and schedule the read. Even if it is later made clear that it is a mistake, it will not incur serious losses. If it is a hit, it can use the erase time to cut out the read data.

可組合抹除作業期間快取讀取資料及快取程式化資料之兩個態樣以提供進一步之靈活性來最小化整體系統或記憶體附加項。即使在多平面及多晶片資料輸入作業之情況下,資料輸入時間亦可能未充分利用抹除作業所招致之忙碌時間。在該等情形下,亦可添加讀取作業及/或程式作業以充分利用抹除時間。Two modes of fetching data and caching stylized data during a erase erase operation can be combined to provide further flexibility to minimize overall system or memory add-ons. Even in the case of multi-plane and multi-wafer data entry operations, the data entry time may not take full advantage of the busy time incurred by the erase operation. In such cases, read jobs and/or program jobs may also be added to take advantage of the erase time.

讀取期間之快取作業-背景讀取及寫入作業Cache jobs during read-background read and write jobs

在順序地讀出許多頁時通常實施快取讀取以節省時間。可在切出先前感應之頁的時間期間隱藏對一頁之感應以使得用於感應之時間不招致使用者之額外等待時間。一普通機制將在切出當前頁時感應下一頁。A cache read is typically performed to save time when multiple pages are read sequentially. The sensing of a page can be hidden during the time when the previously sensed page is cut out so that the time for sensing does not incur additional waiting time for the user. A normal mechanism will sense the next page when the current page is cut.

圖37示意地說明典型讀取快取機制。在先前循環中感應第(n-1)頁且將其鎖存於資料鎖存器中。在時間t0處,如由T(n-1)所指示而經由I/O匯流排自資料鎖存器切出第(n-1)頁。在切換發生之同時,可如S(n)所指示而感應且鎖存第n頁。在t2處,完成對第(n-1)頁之切換且因此其可繼之以如由T(n)所指示的自資料鎖存器切換第n頁之資料。類似地,在切出第n頁資料時,可如S(n+1)所指示而感應且鎖存第(n+1)頁之資料。可緊於第n頁完成切換之後切換此第(n+1)頁。理想地,資料鎖存器及I/O匯流排在整個讀取快取期間完全處於使用中以使得任何閒置時間得以最小化。Figure 37 schematically illustrates a typical read cache mechanism. The (n-1)th page is sensed in the previous loop and latched in the data latch. At time t0, the (n-1)th page is cut from the data latch via the I/O bus as indicated by T(n-1). While the switching occurs, the nth page can be sensed and latched as indicated by S(n). At t2, the switching of the (n-1)th page is completed and thus it can be followed by the switching of the nth page from the data latch as indicated by T(n). Similarly, when the nth page of data is cut out, the data of the (n+1)th page can be sensed and latched as indicated by S(n+1). This (n+1)th page can be switched after the switch is completed on the nth page. Ideally, the data latches and I/O buss are fully in use throughout the read cache to minimize any idle time.
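A minimal sketch of this conventional pipeline is given below, with sensing and I/O transfer reduced to list operations; the real timing of S(n) and T(n-1) is of course concurrent in hardware, and here the overlap is only implied by the order of the two assignments in each cycle.

```python
# Minimal sketch of the read-cache pipeline of Figure 37: while page (n-1) is
# being toggled out over the I/O bus, page n is sensed into the data latches.
def cached_read(pages):
    out = []
    latched = pages[0] if pages else None              # S(0): prime the pipeline
    for n in range(1, len(pages) + 1):
        toggling_out = latched                          # T(n-1) occupies the I/O bus
        latched = pages[n] if n < len(pages) else None  # S(n) proceeds in parallel
        out.append(toggling_out)
    return out

print(cached_read(["page 0", "page 1", "page 2"]))      # pages emerge in order
```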

根據本發明之一態樣,提供讀取快取機制用於具有最小化記憶體單元之間的擾動(Yupin效應)之需要的多狀態記憶體單元之情形。在較佳實施例中,使用有效讀取快取機制用於以"LM"編碼而編碼且以先行("LA")校正而讀取之記憶體。"LM"編碼及"LA"校正均需要除僅僅切換讀取資料以外的額外鎖存器及匯流排行為。結合圖37而描述之習知機制的直接應用將不產生最佳讀取快取。In accordance with an aspect of the present invention, a read cache mechanism is provided for use with a multi-state memory cell having the need to minimize the disturbance between memory cells (Yupin effect). In the preferred embodiment, a valid read cache mechanism is used for memory encoded with "LM" encoding and read with a first ("LA") correction. Both the "LM" code and the "LA" correction require additional latch and bus behavior in addition to simply switching the read data. The direct application of the conventional mechanism described in connection with Figure 37 will not result in an optimal read cache.

隨著半導體記憶體中之日益提高之整合度,記憶體單元之間歸因於所儲存之電荷的電場之擾動(Yupin效應)在細胞間間距正在收縮時變得愈來愈明顯。較佳地使用LM編碼來對記憶體之多狀態記憶體單元進行編碼,以最佳次序程式化記憶體中之頁,且使用LA校正而讀取經程式化之頁。改良之讀取作業將實施最佳快取作業。With the increasing integration in semiconductor memory, the disturbance of the electric field due to the stored charge between memory cells (Yupin effect) becomes more and more apparent as the intercellular spacing is shrinking. The LM code is preferably used to encode the multi-state memory cells of the memory, to program the pages in the memory in an optimal order, and to read the stylized pages using LA correction. Improved read jobs will implement the best cache.

對於LM代碼之快取讀取演算法Cache code read algorithm for LM code

當待讀取之頁為多狀態時,讀取快取之實施需滿足所使用之多狀態編碼之要求。如之前結合圖20A至圖20E而描述,用於多狀態記憶體之LM編碼本質上使記憶體單元中經程式化之電荷在不同程式化通過之間的改變最小化。所示之實例係關於2位元記憶體,其用於編碼每一單元中如由三個不同劃界臨限值(例如,DA 、DB 、DC )而劃界之四個可能記憶體狀態(例如,"U"、"A"、"B"、"C")。舉例而言,在2位元記憶體單元中,對下部邏輯頁之程式化至多將臨限位準推進為略低於單元之臨限窗之中部。後續上部邏輯頁程式化將現有臨限位準進一步推進約距離之另一四分之一。因此,自第一下部至第二最終上部程式化通過,淨改變至多為臨限窗之大約四分之一,且此將為單元自其沿一字線之相鄰者處可能經歷之擾動的最大量。When the page to be read is in multiple states, the implementation of the read cache needs to meet the requirements of the multi-state coding used. As previously described in connection with Figures 20A-20E, LM encoding for multi-state memory essentially minimizes the change between stylized charges in memory cells between different stylized passes. The example shown is for 2-bit memory, which is used to encode four possible memories in each cell that are delimited by three different demarcation thresholds (eg, D A , D B , D C ). Body state (for example, "U", "A", "B", "C"). For example, in a 2-bit memory cell, the stylization of the lower logical page pushes the threshold level up to just below the threshold window of the cell. Subsequent stylization of the upper logical page further advances the existing threshold level by another quarter of the distance. Thus, from the first lower portion to the second final upper portion, the net change is at most about a quarter of the threshold window, and this will be the disturbance that the unit may experience from its neighbors along the word line. The maximum amount.

LM編碼之一特徵在於可單獨地考慮兩個位元(下部及上部位元)中之每一者。然而,對下部位元頁之解碼將視是否已對上部頁進行程式化而定。若已對上部頁進行程式化,則讀取下部頁將需要關於劃界臨限電壓DB 之讀取B之一讀取通過。若尚未對上部頁進行程式化,則讀取下部頁將需要關於劃界臨限電壓DA 之讀取A之一讀取通過。為了分辨兩種情形,在對上部頁進行程式化時在上部頁中(通常在附加項或系統區中)寫入旗標("LM"旗標)。在對下部位元頁之讀取期間,將首先假定已對上部頁進行程式化且因此將執行讀取B作業。若LM旗標經讀取,則假定正確且完成讀取作業。另一方面,若第一讀取未產生旗標,則其將指示尚未對上部頁進行程式化且因此需藉由讀取A作業而再讀取下部頁。One of the features of the LM code is that each of the two bits (lower and upper part elements) can be considered separately. However, the decoding of the lower part meta page will depend on whether the upper page has been programmed. If the upper page has been programmed, reading the lower page will require reading of one of the readings B regarding the demarcation threshold voltage D B . If the upper page has not been programmed, reading the lower page will require a read of one of the read A of the demarcation threshold voltage D A . To distinguish between the two cases, the flag ("LM" flag) is written in the upper page (usually in the add-on or system area) when the upper page is programmed. During the reading of the lower part meta page, it will first be assumed that the upper page has been programmed and thus the read B job will be executed. If the LM flag is read, it is assumed to be correct and the read operation is completed. On the other hand, if the first read does not produce a flag, it will indicate that the upper page has not been programmed and therefore the lower page has to be read again by reading the A job.
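The lower-page decision just described can be sketched as follows. The demarcation voltages and the state-to-bits assignment (U=11, A=01, B=00, C=10, written as upper/lower) are assumptions chosen for the example rather than values taken from the text; read B and read A are modelled as simple threshold comparisons.

```python
# Illustrative lower-page read with the LM flag check described above.
D_A, D_B = 1.0, 2.0                      # assumed demarcation threshold voltages

def read_lower_page(cell_vts, lm_flag_set):
    # First pass: assume the upper page was programmed, so sense at D_B (read B).
    data = [1 if vt < D_B else 0 for vt in cell_vts]
    if lm_flag_set:
        return data                      # LM flag found: the read-B result stands
    # Flag absent: upper page never programmed, so re-read at D_A (read A).
    return [1 if vt < D_A else 0 for vt in cell_vts]

fully_programmed = [0.2, 1.5, 2.5, 3.5]  # cells in U, A, B, C
print(read_lower_page(fully_programmed, lm_flag_set=True))    # [1, 1, 0, 0]
lower_page_only = [0.2, 1.8, 0.3, 1.7]   # only U and the intermediate state exist
print(read_lower_page(lower_page_only, lm_flag_set=False))    # [1, 0, 1, 0]
```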

對上部位元頁讀取之解碼將需要作業讀取A及讀取C,其分別關於劃界臨限電壓DA 及DC 。類似地,若尚未對上部頁進行程式化,則上部頁之解碼亦可經干擾。再一次,LM旗標將指示是否已對上部頁進行程式化。若尚未對上部頁進行程式化,則讀取資料將被重設為"1"而指示未對上部頁資料進行程式化。Decoding of the upper page meta page read will require job read A and read C, respectively, with respect to demarcation threshold voltages D A and D C . Similarly, if the upper page has not been programmed, the decoding of the upper page can also be disturbed. Again, the LM flag will indicate if the upper page has been programmed. If the upper page has not been programmed, the read data will be reset to "1" indicating that the upper page data has not been programmed.
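The corresponding upper-page decode, under the same assumed thresholds and bit assignment as the lower-page sketch above:

```python
# Illustrative upper-page read: it needs read A (at D_A) and read C (at D_C);
# when the LM flag is absent the page is simply reported as all "1"s.
D_A, D_C = 1.0, 3.0                      # assumed demarcation threshold voltages

def read_upper_page(cell_vts, lm_flag_set):
    if not lm_flag_set:
        return [1] * len(cell_vts)       # upper page never programmed
    # Upper bit is 1 for states U (below D_A) and C (at or above D_C), else 0.
    return [1 if (vt < D_A or vt >= D_C) else 0 for vt in cell_vts]

print(read_upper_page([0.2, 1.5, 2.5, 3.5], lm_flag_set=True))    # [1, 0, 0, 1]
print(read_upper_page([0.2, 1.8, 0.3, 1.7], lm_flag_set=False))   # [1, 1, 1, 1]
```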

在使用LM編碼而實施對記憶體之快取讀取時,存在需要檢查與資料儲存於同一區上之LM旗標的額外考慮。為了使狀態機檢查LM旗標,其將需要經由I/O匯流排而自資料鎖存器輸出。此將需要對I/O匯流排進行配置以在具有快取之讀取作業期間除了切換所感應之資料之外用於輸出LM旗標。When performing a cache read of a memory using LM encoding, there is an additional consideration that it is necessary to check the LM flag stored on the same area as the data. In order for the state machine to check the LM flag, it will need to be output from the data latch via the I/O bus. This would require configuring the I/O bus to be used to output the LM flag in addition to switching the sensed data during a read operation with a cache.

圖38A為關於以LM代碼編碼之邏輯頁之快取讀取的示意時序圖。在感應當前頁之同時切換上一頁資料之一般機制類似於圖37所示之習知讀取之機制。然而,以LM代碼進行之感應由於潛在地需要進行兩次感應通過(LM旗標之檢查在其間)而為複雜的。Figure 38A is a schematic timing diagram for a cache read of a logical page encoded in LM code. The general mechanism for switching the previous page of data while sensing the current page is similar to the conventional reading mechanism shown in FIG. However, the induction with the LM code is complicated by the potential need to perform two inductive passes (the inspection of the LM flag is in between).

在時間t0處,如由T(n-1)所指示而將上一循環中所感應之第(n-1)邏輯頁自資料鎖存器切出至I/O匯流排。同時,S1 (n)感應下一邏輯頁(n)。在LM編碼之情況下,需分辨兩種情形:對下部位元邏輯頁之讀取;及對上部位元邏輯頁之讀取。At time t0, the (n-1)th logical page sensed in the previous cycle is clipped from the data latch to the I/O bus as indicated by T(n-1). At the same time, S 1 (n) senses the next logical page (n). In the case of LM coding, two situations need to be resolved: reading the logical page of the lower part meta; and reading the logical page of the upper part meta.

對於讀取下部位元邏輯頁之情形,較佳感應將以對於已對上部邏輯頁進行程式化之假定而開始,因此第一感應S1 (n)將處於關於劃界臨限電壓DB 之讀取B處。在t1處完成S1 (n)且將產生LM旗標。然而,其僅可在I/O匯流排完成切換第(n-1)頁之後的t2處輸出。在將LM旗標傳達至狀態機之後,對其進行檢查以判定上部頁是否存在。若LM旗標經設定,則假定正確且下部位元頁經正確讀取。已鎖存之頁(n)之資料準備好在下一循環中被切出。For the case of reading the lower-part logical page, the preferred sensing will begin with the assumption that the upper logical page has been programmed, so the first sensing S 1 (n) will be in relation to the demarcation threshold voltage D B Read B. S 1 (n) is completed at t1 and an LM flag will be generated. However, it can only be output at t2 after the I/O bus is completed switching the (n-1)th page. After the LM flag is communicated to the state machine, it is checked to determine if the upper page is present. If the LM flag is set, it is assumed that the correct and lower part meta page is correctly read. The data of the latched page (n) is ready to be cut out in the next cycle.

對於讀取上部位元邏輯頁之情形,S1 (n)將逐步通過分別關於劃界臨限電壓DA 及DC 之讀取A及讀取C。上部位元頁之所感應之資料將儲存於DL2中且DL0資料鎖存器用於切出資料(見圖13及圖14)。在t2處,將DL2的感應之資料轉移至DL0。又,在於第(n-1)頁之切換之結尾處輸出LM旗標之後對其進行檢查。若上部頁經程式化,則一切情況良好且鎖存器中之所感應之資料(頁(n))準備好在下一循環中被切出。For the case of reading the upper part meta logical page, S 1 (n) will step through the read A and read C for the demarcation threshold voltages D A and D C , respectively. The data sensed by the upper part of the metapage will be stored in DL2 and the DL0 data latch will be used to cut out the data (see Figures 13 and 14). At t2, the sensed data of DL2 is transferred to DL0. Also, the LM flag is checked after the end of the switching of the (n-1)th page. If the upper page is programmed, everything is fine and the data sensed in the latch (page (n)) is ready to be cut out in the next cycle.

在讀取上部位元邏輯頁時,若發現LM旗標未經設定,則其將指示上部頁未經程式化。自S1 (n)感應之資料將被重設為"1"以與LM編碼適當地一致。感應之資料接著準備好輸出。接著將預取出第一位元組且隨後為下一循環開始時之整頁切出。When reading the upper part meta logical page, if it is found that the LM flag is not set, it will indicate that the upper page is not stylized. The data sensed from S 1 (n) will be reset to "1" to properly match the LM code. The sensed data is then ready for output. The first byte will then be prefetched and then cut out for the entire page at the beginning of the next cycle.

圖38B為關於以LM代碼進行之快取讀取在尚未對上部位元邏輯頁進行程式化時讀取下部位元邏輯頁之特殊情形中的示意時序圖。又,在t0處開始第一感應S1 (n)且在t1處讀取LM旗標。輸出LM旗標用於t2處之檢查。若發現LM旗標未經設定,則S1 (n)在讀取B處不正確地讀取了下部位元頁。第二感應S2 (n)將開始於t3以於讀取A處執行。然而,此額外感應(結束於t4)無法隱藏於第(n-1)頁之切換(例如,T(n-1))之時間後,因為在第二感應之前檢查來自S1 (n)之旗標將需要存取I/O匯流排且將需要等待直至T(n-1)切換完成。Figure 38B is a schematic timing diagram in a special case of a cache read by LM code in which a lower-part logical page is read when the upper-part meta-logic page has not been programmed. Again, the first sense S 1 (n) is started at t0 and the LM flag is read at t1. The output LM flag is used for the check at t2. If it is found that the LM flag is not set, S 1 (n) incorrectly reads the lower part meta page at the reading B. The second induction S 2 (n) will begin at t3 for execution at read A. However, this extra induction (ending at t4) cannot be hidden after the time of the (n-1)th page switch (eg, T(n-1)) because the check is from S 1 (n) before the second sense The flag will need to access the I/O bus and will need to wait until the T(n-1) switch is complete.

以所有位元感應而進行之快取讀取演算法Cache read algorithm with all bit sensing

在替代機制中,當在一字線上待讀取之頁為具有同一實體頁上之多個邏輯頁之多個位元的頁時,可在一感應作業中一同感應所有多個位元以節省功率。In an alternative mechanism, when a page to be read on a word line is a page having a plurality of bits of a plurality of logical pages on the same physical page, all of the plurality of bits can be sensed together in one sensing operation to save power.

圖39說明對於2位元記憶體以所有位元感應而進行之快取讀取的示意時序圖。在2位元之情形下,在同一作業中感應表示四個記憶體狀態之兩個位元。此將需要在讀取A、讀取B及讀取C感應以分辨四個狀態。在此情形下,感應將在每隔一個之循環中發生。舉例而言,感應僅在奇數循環上發生且在偶數循環上將被跳過。將在每一循環順序地切出在一感應中獲得之兩個邏輯頁。Figure 39 illustrates a schematic timing diagram for a cache read of a 2-bit memory with all bit sensing. In the case of a 2-bit, two bits representing the state of the four memories are sensed in the same job. This will require reading A, reading B, and reading C sensing to resolve the four states. In this case, the induction will occur in every other cycle. For example, sensing occurs only on odd cycles and will be skipped on even cycles. Two logical pages obtained in one induction will be sequentially cut out in each cycle.
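All-bit sensing for the 2-bit case can be sketched as one pass over read A, read B and read C that resolves both logical pages at once; the two pages are then toggled out over two consecutive cycles, so sensing is needed only on every other cycle. The thresholds and bit assignment are the same assumptions used in the earlier sketches.

```python
# Illustrative all-bit sensing for a 2-bit cell: a single sensing pass over
# read A, read B and read C resolves the four states and yields both pages.
D_A, D_B, D_C = 1.0, 2.0, 3.0            # assumed demarcation threshold voltages

def sense_all_bits(cell_vts):
    lower, upper = [], []
    for vt in cell_vts:
        state = sum(vt >= d for d in (D_A, D_B, D_C))   # 0=U, 1=A, 2=B, 3=C
        lower.append(1 if state in (0, 1) else 0)       # assumed LM lower bit
        upper.append(1 if state in (0, 3) else 0)       # assumed LM upper bit
    return lower, upper

lower_page, upper_page = sense_all_bits([0.2, 1.5, 2.5, 3.5])
print(lower_page)   # [1, 1, 0, 0]: toggled out in the first (odd) cycle
print(upper_page)   # [1, 0, 0, 1]: toggled out in the next cycle, no new sense
```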

在存在八個狀態(例如"U"、"A"、"B"、"C"、"D"、"E"、"F"及"G")之3位元情形下,所有位元感應將涉及在讀取A、讀取B、讀取C、讀取D、讀取E、讀取F及讀取G處之感應以分辨八個狀態。In the case of a 3-bit case where there are eight states (for example, "U", "A", "B", "C", "D", "E", "F", and "G"), all bit senses The inductions at Read A, Read B, Read C, Read D, Read E, Read F, and Read G will be involved to resolve the eight states.

一般而言,少於所有位元之任何多位元感應將用來減少讀取頁之所有位元所需感應之次數且將有助於節省功率。結合圖30而描述之記憶體作業佇列及佇列管理器可用以藉由合併兩個或兩個以上之二進位頁感應而管理所有位元感應作業。所有位元感應機制可應用於具有LM代碼之記憶體且亦可應用於具有LA校正之記憶體(其將在下一章節中得以描述)。In general, any multi-bit sensing of less than all of the bits will be used to reduce the number of times required to read all of the bits of the page and will help save power. The memory job queue and queue manager described in connection with FIG. 30 can be used to manage all of the bit sensing operations by combining two or more binary page sensing. All bit sensing mechanisms can be applied to memory with LM code and can also be applied to memory with LA correction (which will be described in the next section).

關於LM代碼連同LA校正之快取讀取演算法About the LM code together with the LA correction cache read algorithm

關於鄰近字線上之記憶體單元之間的擾動,其可藉由使用較佳程式化機制而在程式化期間得以減輕。此將有效地將擾動減半。亦可藉由使用較佳LA讀取機制而在讀取期間校正剩餘之一半。Regarding the disturbance between memory cells on adjacent word lines, it can be mitigated during stylization by using a better stylized mechanism. This will effectively halve the disturbance. One of the remaining half can also be corrected during reading by using a preferred LA reading mechanism.

較佳程式化機制將以最佳序列而程式化與字線相關聯之頁。舉例而言,在每一實體頁保持一頁二進位資料之二進位記憶體之情形下,較佳地沿始終如一之方向(諸如自底部至頂部)而順序地對頁進行程式化。以此方式,當程式化特定頁時,其下側之頁已經程式化。無論其對於當前頁有何擾動效應,在鑒於此等擾動而對當前頁進行程式化驗證時對其加以解決。本質上,程式化頁之序列應允許正進行程式化之當前頁在其經程式化之後經歷圍繞其環境之最小改變。因此,每一經程式化之頁僅受其上側之頁之擾動且字線與字線之間的Yupin效應藉由此程式化序列而有效地減半。A better stylization mechanism will program the pages associated with the word lines in an optimal sequence. For example, in the case where each physical page holds a binary memory of a page of binary data, the pages are preferably sequentially sequenced in a consistent direction, such as from bottom to top. In this way, when a particular page is stylized, the page on its underside is already stylized. Regardless of its perturbation effect on the current page, it is addressed when the current page is programmatically verified in view of such perturbations. Essentially, the sequence of stylized pages should allow the current page being programmed to undergo minimal changes around its environment after it has been programmed. Thus, each stylized page is only disturbed by the page on its upper side and the Yupin effect between the word line and the word line is effectively halved by this stylized sequence.

在記憶體單元之每一實體頁為多狀態的記憶體之情形下,序列較不直接。舉例而言,在2位元記憶體中,可將與一字線相關聯之每一實體頁視作具有2位元資料之單一頁或兩個單獨之邏輯頁(各具有1位元資料之下部及上部位元)。因此可在一次通過中關於兩個位元對實體頁進行程式化,或在兩次單獨之通過中,首先關於下部位元頁且接著稍後關於上部位元頁而對實體頁進行程式化。當將在兩次單獨之通過中對每一實體頁進行程式化時,經修改之最佳序列為可能的。In the case where each physical page of the memory unit is a multi-state memory, the sequence is less straightforward. For example, in 2-bit memory, each physical page associated with a word line can be treated as a single page with 2 bit data or two separate logical pages (each having 1 bit of data) Lower and upper parts). Thus, the physical page can be stylized with respect to two bits in one pass, or in two separate passes, first with respect to the lower part meta page and then later with respect to the upper part meta page. The modified optimal sequence is possible when each physical page is to be programmed in two separate passes.

圖40說明一記憶體之實例,其具有2位元記憶體單元且使其頁以最佳序列程式化從而最小化鄰近字線上之記憶體單元之間的Yupin效應。為了方便,表示法為如下:實體頁P0、P1、P2......分別常駐於字線W0、W1、W2......上。對於2位元記憶體而言,每一實體頁具有與其相關聯之兩個邏輯頁,即各具有二進位資料之下部位元及上部位元邏輯頁。一般而言,藉由LP(字線.邏輯頁)而給出特定邏輯頁。舉例而言,將W0上之P0之下部位元及上部位元頁分別標為LP(0.0)及LP(0.1),且W2上之相應者將為LP(2.0)及LP(2.1)。Figure 40 illustrates an example of a memory having a 2-bit memory cell and having its pages programmed in an optimal sequence to minimize the Yupin effect between memory cells on adjacent word lines. For convenience, the representation is as follows: the physical pages P0, P1, P2, ... are respectively resident on the word lines W0, W1, W2, .... For 2-bit memory, each physical page has two logical pages associated with it, that is, each has a lower-order material lower part and an upper-part meta-logic page. In general, a specific logical page is given by LP (word line. logical page). For example, the part of the lower part of P0 on W0 and the upper part of the page are marked as LP (0.0) and LP (0.1), respectively, and the corresponding ones on W2 will be LP (2.0) and LP (2.1).

本質上，邏輯頁之程式化將遵循序列n以使得正進行程式化之當前頁在其經程式化之後將經歷圍繞其環境之最小改變。在此情形下，再一次在自底部至頂部之一始終如一之方向上漸增地移動將有助於消除來自一側之擾動。此外，因為每一實體頁可能具有兩次程式化通過，所以在程式化對於實體頁上移時，當前上部位元頁在已對其鄰近的下部位元頁進行程式化之後經程式化以使得該等下部位元頁之擾動效應將在對當前上部位元頁進行程式化時得以解決將為較佳的。因此，若程式化自LP(0.0)開始，則序列將如以將產生LP(0.0)、LP(1.0)、LP(0.1)、LP(2.0)、LP(1.1)、LP(3.0)、LP(2.1)......之頁程式化次序0、1、2......n而做記號。Essentially, the stylization of the logical pages will follow the sequence n such that the current page being programmed will undergo minimal changes around its environment after it has been programmed. In this case, once again moving incrementally in a consistent direction from bottom to top will help to eliminate disturbances from one side. In addition, because each physical page may have two stylized passes, as the stylization moves up the physical pages it is preferable that the current upper part meta page be programmed only after its neighboring lower part meta pages have been programmed, so that the perturbation effects of those lower part meta pages are resolved when the current upper part meta page is programmed. Therefore, if programming starts from LP(0.0), the sequence will be marked with the page programming order 0, 1, 2, ..., n, which yields LP(0.0), LP(1.0), LP(0.1), LP(2.0), LP(1.1), LP(3.0), LP(2.1), ...
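The order described above can be generated mechanically; the short sketch below (illustrative only) reproduces the LP(0.0), LP(1.0), LP(0.1), LP(2.0), LP(1.1), ... sequence by programming each upper page only after the lower page of the word line above it.

```python
# Illustrative generator for the optimal programming order described above.
def lm_program_order(num_wordlines):
    order = []
    for m in range(num_wordlines):
        order.append(f"LP({m}.0)")                 # lower page of word line m
        if m >= 1:
            order.append(f"LP({m - 1}.1)")         # upper page of the word line below
    if num_wordlines:
        order.append(f"LP({num_wordlines - 1}.1)") # final upper page closes the sequence
    return order

print(lm_program_order(4))
# ['LP(0.0)', 'LP(1.0)', 'LP(0.1)', 'LP(2.0)', 'LP(1.1)', 'LP(3.0)', 'LP(2.1)', 'LP(3.1)']
```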

關於LM代碼連同LA校正之快取讀取演算法About the LM code together with the LA correction cache read algorithm

根據本發明之一態樣,實施用於快取讀取資料之機制以使得即使對於校正視來自相鄰實體頁或字線之資料而定之讀取作業,資料鎖存器及I/O匯流排亦有效地用以在當前頁正自記憶體核心而被感應之同時切出先前讀取頁。詳言之,較佳讀取作業為"先行"("LA")讀取且對於記憶體狀態之較佳編碼為"中下"("LM")代碼。在必須以對鄰近字線上之資料之預先必要的讀取而居先於對當前字線上之當前頁之讀取時,該預先必要的讀取連同任何I/O存取在讀取先前頁之循環中經優先完成以使得可在先前讀取之頁忙於I/O存取之同時執行當前讀取。In accordance with an aspect of the present invention, a mechanism for caching read data is implemented such that data read latches and I/O busses are read even for correcting data from adjacent physical pages or word lines. It is also effective to cut out previously read pages while the current page is being sensed from the core of the memory. In particular, the preferred read job is "lead" ("LA") read and the preferred encoding for the memory state is the "lower middle" ("LM") code. The pre-requisite read along with any I/O access is read on the previous page when the previous page of the current word line must be read prior to the pre-requisite reading of the data on the adjacent word line. The loop is prioritized so that the current read can be performed while the previously read page is busy with I/O access.

於2005年4月5日申請的題為"Read Operations for Non-Volatile Storage that Includes Compensation for Coupling"之美國專利申請案第11/099,049號(其全部揭示內容以引用的方式併入本文中)中已揭示LA讀取機制。伴隨LA("先行")校正之讀取基本上檢查程式化至鄰近字線上之單元中之記憶體狀態且校正其對當前字線上正被讀取之記憶體單元所造成之任何擾動效應。若頁已根據上文描述之較佳程式化機制而程式化,則鄰近字線將來自緊於當前字線上方之字線。LA校正機制將需要鄰近字線上之資料先於當前頁而經讀取。U.S. Patent Application Serial No. 11/099,049, the entire disclosure of which is incorporated herein by reference in its entirety in its entirety in The LA reading mechanism has been revealed. A read with LA ("preemptive") correction essentially checks the state of the memory programmed into the cells on the adjacent word line and corrects for any disturbing effects caused by the memory cells being read on the current word line. If the page has been programmed according to the preferred stylization mechanism described above, the adjacent word line will come from the word line immediately above the current word line. The LA correction mechanism will require the data on the adjacent word line to be read prior to the current page.

舉例而言,參看圖40,若待讀取之當前頁(n)處於WLm(例如,WL1)上,則如將由SLA (n)所表示之LA讀取將首先讀取下一字線WLm+1(例如,WL2)且將資料結果儲存於一資料鎖存器中。接著,將接著鑒於SLA (n)結果而感應當前頁且此將由S1 '(n)表示。For example, referring to FIG. 40, if the current page (n) to be read is on WLm (eg, WL1), then the LA read as indicated by S LA (n) will first read the next word line WLm+1. (eg, WL2) and store the data results in a data latch. Next, the current page will be sensed in view of the S LA (n) result and this will be represented by S 1 '(n).

如早先結合圖40所描述,在具有較佳程式化序列之LM代碼中,下部頁(例如,LP(1.0))將經程式化至DB 或接近於DB (中間狀態)。將僅在對WLm+1下部頁(例如,LP(2.0))程式化之後程式化上部頁(例如,LP(1.1))。接著將完全消除下部頁之WL與WL之間的Yupin效應。因此,將僅對"A"及"C"狀態而不對"U"或"B"狀態執行資料相關之校正。As described earlier in connection with Figure 40, in an LM code with a better stylized sequence, the lower page (e.g., LP (1.0)) will be programmed to D B or close to D B (intermediate state). The upper page (eg, LP(1.1)) will be programmed only after stylizing the lower page of WLm+1 (eg, LP (2.0)). The Yupin effect between WL and WL of the lower page will then be completely eliminated. Therefore, data-related corrections will be performed only for the "A" and "C" states and not for the "U" or "B" states.

在LA讀取之較佳實施中，使用鎖存器以指示LA讀取是否發現"A"或"C"狀態或者"U"或"B"狀態。在前一情形下需要校正且在後一情形下不需要校正。將藉由對感應參數之合適調整(諸如提昇感應期間之字線電壓)而相應地校正當前讀取S1 (n)中之相應單元。此藉由在調整之情況中感應一次且在未調整之情況下感應另一次而對整個當前頁進行。接著將根據鎖存器是否指示校正而自此等兩次感應選擇頁之每一單元之資料。In a preferred implementation of the LA read, a latch is used to indicate whether the LA read finds an "A" or "C" state or a "U" or "B" state. Correction is required in the former case and no correction is required in the latter case. The corresponding cell in the current read S 1 (n) will be corrected accordingly by appropriate adjustment of the sensing parameters, such as boosting the word line voltage during sensing. This is done for the entire current page by sensing once with the adjustment and another time without the adjustment. The data for each cell of the page is then selected from these two senses according to whether its latch indicates that correction is needed.
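The per-cell selection just described can be sketched as follows. All numbers (thresholds, the compensating word-line offset) and helper names are assumptions for the example; the essential points are that the look-ahead read of WLm+1 latches a per-bit-line flag for cells found in the "A" or "C" state, the current page is sensed once with and once without the adjusted condition, and each cell keeps the result its flag selects.

```python
# Illustrative LA-correction sketch: flag cells whose WLm+1 neighbour is in "A"
# or "C", sense the current page with and without a compensating word-line
# offset, and pick each cell's result according to its flag.
D_A, D_B, D_C = 1.0, 2.0, 3.0            # assumed demarcation threshold voltages
OFFSET = 0.2                             # assumed compensation of the read level

def classify(vt):
    return "UABC"[sum(vt >= d for d in (D_A, D_B, D_C))]

def la_corrected_read(current_vts, neighbour_vts, read_level=D_B):
    # Look-ahead pass on WLm+1: latch, per bit line, whether correction is needed.
    needs_fix = [classify(vt) in ("A", "C") for vt in neighbour_vts]
    # Sense the whole current page twice: without and with the adjusted bias.
    plain = [1 if vt < read_level else 0 for vt in current_vts]
    adjusted = [1 if vt < read_level + OFFSET else 0 for vt in current_vts]
    # Keep, for each cell, the result selected by its correction flag.
    return [a if fix else p for p, a, fix in zip(plain, adjusted, needs_fix)]

current = [1.95, 2.1, 2.1, 0.4]          # the middle cells sit near the read margin
neighbour = [2.6, 3.5, 0.2, 1.5]         # "B", "C", "U", "A" on the word line above
print(la_corrected_read(current, neighbour))
# [1, 1, 0, 1]: the second cell is restored to 1 by compensating for its "C" neighbour
```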

以LM代碼進行之讀取將需要在最終定下讀取結果之前檢查LM旗標(藉由第二次通過讀取或藉由重設讀取資料)。LA校正需在讀取當前字線之前首先進行下一字線讀取。因此,需藉由狀態機而檢查來自下一字線讀取之LM旗標及來自當前字線之LM旗標。需在I/O匯流排不忙於切換讀取資料時經由I/O匯流排將此等兩個LM旗標輸出至狀態機。Reading with the LM code will require checking the LM flag (by reading the second time or by resetting the read data) before finalizing the read result. The LA correction needs to first read the next word line before reading the current word line. Therefore, the LM flag read from the next word line and the LM flag from the current word line are checked by the state machine. These two LM flags need to be output to the state machine via the I/O bus when the I/O bus is not busy switching data.

圖41說明根據圖37所示之習知機制的對於LM代碼連同LA校正之讀取快取之實施。基本上，習知機制係關於將對當前頁之感應藏於所感應之先前頁之資料切出時間內。然而，在此情形下，必須以WLm+1上之額外先行讀取SLA (n)居先於WLm上之當前頁感應S1 '(n)。必須在確定所感應之資料之前經由I/O匯流排輸出此等感應中之每一者之LM旗標。鑒於來自SLA (n)之資料而執行當前頁感應S1 '(n)以產生當前頁之經校正之資料。應瞭解如圖38B所示，若n為下部位元頁且上部位元頁尚未經程式化，則S1 '(n)之後可存在額外S2 '(n)。Figure 41 illustrates the implementation of a read cache for LM code along with LA correction in accordance with the conventional mechanism illustrated in Figure 37. Basically, the conventional mechanism hides the sensing of the current page within the time for toggling out the data of the previously sensed page. However, in this case, the current page sense S 1 '(n) on WLm must be preceded by the additional look-ahead read S LA (n) on WLm+1. The LM flag of each of these senses must be output via the I/O bus prior to determining the sensed data. The current page sensing S 1 '(n) is performed in view of the data from S LA (n) to produce corrected data for the current page. It should be understood that, as shown in FIG. 38B, if n is a lower part meta page and the upper part meta page has not yet been programmed, an additional S 2 '(n) may follow S 1 '(n).

在開始於t0之下一循環中，接著如T(n)所指示而切出頁n之經校正的感應之資料。同時，當前感應現已以必須由SLA (n+1)居先之S1 '(n+1)而移動至下一頁。然而，來自此等感應之LM旗標之輸出必須等待直至對頁n之切換T(n)完成。此外，僅可在SLA (n+1)之結果確定之後執行S1 (n+1)。因此，S1 '(n+1)僅可在資料切換週期之外執行且因此無法藏於其之後。此在未充分利用鎖存器及I/O匯流排時添加額外感應時間，且浪費之時間對於每一後續循環重複。此實施在使用LA校正時使使用者之讀取效能降級。In the cycle beginning at t0, the corrected sensed data for page n is then toggled out as indicated by T(n). Meanwhile, the current sensing has now moved on to the next page with S 1 '(n+1), which must be preceded by S LA (n+1). However, the output of the LM flags from these senses must wait until the switch T(n) for page n is complete. Further, S 1 (n+1) can be performed only after the result of S LA (n+1) is determined. Therefore, S 1 '(n+1) can only be performed outside the data switching period and thus cannot be hidden behind it. This adds additional sensing time when the latches and I/O bus are not fully utilized, and the wasted time is repeated for each subsequent cycle. This implementation degrades the user's read performance when using LA correction.

以LM代碼連同LA校正進行之快取讀取之較佳實施為以所有感應將藏於資料切換內之方式而對下一字線感應及當前字線感應進行管線式作業。下一字線感應總在當前字線感應之前執行。在每一組資料切換內,將執行當前字線感應且隨後為下下一字線感應。當已結束切出該組資料且I/O匯流排可用時,將首先取出下下一字線LM旗標且對其進行檢查。若LM旗標處於指示上部頁未經程式化之狀態中,則將下下一字線的感應之資料重設為"1"(由於無校正)。隨後將檢查當前字線LM旗標。視當前字線LM旗標而定,保持所感應之資料或需執行另一感應(在下部頁讀取之情形下)或者將資料重設為均為"1"(在上部頁讀取之情形下)。對於具有2位元記憶體單元之記憶體,可藉由3個資料鎖存器而管理所有此等感應及資料切出。A preferred implementation of the cache read with the LM code along with the LA correction is to pipeline the next word line sense and the current word line sense in such a way that all senses are hidden within the data switch. The next word line sense is always performed before the current word line sense. Within each set of data switches, the current word line sensing is performed and then sensed for the next next word line. When the group data has been cut and the I/O bus is available, the next word line LM flag will be taken out first and checked. If the LM flag is in a state indicating that the upper page is not programmed, reset the sensed data of the next lower word line to "1" (due to no correction). The current wordline LM flag will then be checked. Depending on the current word line LM flag, keep the sensed data or perform another induction (in the case of reading the lower page) or reset the data to "1" (in the case of reading on the upper page) under). For a memory with a 2-bit memory unit, all of these sensing and data cuts can be managed by three data latches.

圖42說明以LM代碼連同LA校正進行之改良讀取快取機制。自-t5至t0之第一循環為讀取WLm上之當前頁(n)之時間且不同於循環之剩餘部分。如前所述，LA校正需要在先之讀取SLA (n)，其中讀取A、讀取B及讀取C將感應WLm+1上之單元狀態。來自此讀取之LM旗標FLA (n)將於-t4輸出且受到檢查。若旗標指示上部頁在WLm+1上未受到程式化，則感應之資料將被重設為均為"1"以指示將不存在校正。若旗標指示上部頁已經程式化，則指示校正與否之經鎖存之資料將保持為原狀。在-t3處，將根據早先描述之LM代碼及LA校正機制以S1'(n)及(可能地)S2 '(n)感應WLm上之當前頁。與圖41所說明之機制形成對比，亦對於下一頁(n+1)執行優先先行讀取。因此，在時間-t2處執行SLA (n+1)且在-t1處輸出並檢查其LM旗標。Figure 42 illustrates an improved read cache mechanism with LM code along with LA correction. The first cycle from -t5 to t0 is the time to read the current page (n) on WLm and is different from the remainder of the cycles. As previously mentioned, the LA correction requires a prior read S LA (n), where reading A, reading B, and reading C will sense the state of the cells on WLm+1. The LM flag F LA (n) from this read will be output at -t4 and will be checked. If the flag indicates that the upper page is not programmed on WLm+1, the sensed data will be reset to all "1" to indicate that there will be no correction. If the flag indicates that the upper page has been programmed, the latched data indicating correction or not will remain intact. At -t3, the current page on WLm will be sensed with S 1 '(n) and (possibly) S 2 '(n) in accordance with the LM code and LA correction mechanisms described earlier. In contrast to the mechanism illustrated in FIG. 41, the look-ahead read for the next page (n+1) is also performed pre-emptively. Therefore, S LA (n+1) is performed at time -t2 and its LM flag is output and checked at -t1.

在第一循環之後，在t0處的下一循環之開始，將如由T(n)所指示而切出先前自S1 '(n)感應之資料(現經LA校正)。頁位址將首先遞增至常駐於由圖38所指示之次序給出之字線上的(n+1)。因此，在時間t0處，伴隨著T(n)之開始，對第(n+1)頁之感應S1 '(n+1)可立刻開始，因為其預先必要之先行SLA (n+1)已在先前循環中完成。在t1處的S1 '(n+1)之結尾處，將取出並檢查LM旗標F(n+1)且任何額外動作將視LM旗標而跟隨。經校正之頁(n+1)之資料接著將準備好在下一循環中切換。同時，雖然仍在切出頁(n)，但可預先且在T(n)之切換週期內執行對下一頁之先行感應SLA (n+2)。After the first cycle, at the beginning of the next cycle at t0, the data previously sensed from S 1 '(n) (now LA corrected) will be toggled out as indicated by T(n). The page address will first be incremented to (n+1), resident on the word line given by the order indicated by Figure 38. Therefore, at time t0, along with the beginning of T(n), the sensing S 1 '(n+1) of the (n+1)th page can start immediately, because its prerequisite look-ahead S LA (n+1) has already been completed in the previous cycle. At the end of S 1 '(n+1) at t1, the LM flag F(n+1) will be fetched and checked and any additional actions will follow according to the LM flag. The corrected page (n+1) data will then be ready to be toggled out in the next cycle. Meanwhile, although page (n) is still being toggled out, the look-ahead sensing S LA (n+2) for the next page can be performed in advance, within the switching period of T(n).

對頁(n)之切換T(n)一完成,下一循環即開始且T(n+1)以對經LA校正之頁(n+1)之資料的切出而跟隨。對於頁(n+1)之循環以與對於頁(n)之循環相似之方式而繼續。重要特徵在於在早先循環中優先執行對於給定頁之先行讀取。As soon as the switch T(n) of page (n) is completed, the next cycle begins and T(n+1) follows the cut-out of the data of the LA-corrected page (n+1). The loop for page (n+1) continues in a similar manner to the loop for page (n). An important feature is that the first line read for a given page is prioritized in the previous loop.
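The pipelining of Figure 42 can be summarised in a small scheduling sketch (an illustrative reconstruction, not the device's actual microcode): in each cycle's toggle-out window the controller performs the current page's corrected sense, whose look-ahead was already completed one cycle earlier, and then the look-ahead sense for the page after next.

```python
# Illustrative reconstruction of the Figure 42 schedule (assumes at least two
# pages): during T(n), sense page n+1 (its look-ahead S_LA(n+1) was done in the
# previous cycle) and then perform the look-ahead S_LA(n+2) for the page after next.
def pipelined_la_read(num_pages):
    schedule = [["S_LA(0)", "S'(0)", "S_LA(1)"]]        # priming cycle (-t5 to t0)
    for n in range(num_pages):
        cycle = [f"T({n})"]                             # toggle out page n on the I/O bus
        if n + 1 < num_pages:
            cycle.append(f"S'({n + 1})")                # prerequisite look-ahead already done
        if n + 2 < num_pages:
            cycle.append(f"S_LA({n + 2})")              # look-ahead for the page after next
        schedule.append(cycle)
    return schedule

for cycle in pipelined_la_read(4):
    print(" then ".join(cycle))
```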

圖43為說明改良讀取快取之示意流程圖：步驟810：在每一讀取循環(其中將自記憶體感應來自其一系列之頁)中，在當前循環中輸出在上一循環中感應之先前頁。Figure 43 is a schematic flow diagram illustrating the improved read cache: Step 810: In each read cycle, in which one of a series of pages is to be sensed from the memory, output during the current cycle the previous page that was sensed in the last cycle.

步驟830:在該輸出先前頁期間感應當前頁,該感應當前頁執行於當前字線上且需要在鄰近字線處之預先必要之感應以校正來自於鄰近字線上之資料的任何擾動效應。Step 830: Sensing the current page during the output of the previous page, the sensing the current page is performed on the current word line and requires pre-needed sensing at the adjacent word line to correct any perturbation effects from the material on the adjacent word line.

步驟850:在早於當前循環之循環中優先地執行與當前頁相關之鄰近字線之該預先必要的感應。Step 850: The pre-necessary sensing of the adjacent word lines associated with the current page is preferentially performed in a loop earlier than the current loop.

將圖43所示之步驟850以進一步之清晰度表示於圖44之示意流程圖中：步驟852：輸出作為來自該預先必要之感應的資料之部分而獲得之第一旗標。Figure 44 is a schematic flow diagram illustrating step 850 of Figure 43 with further clarity: Step 852: Output a first flag obtained as part of the data from the pre-requisite sensing.

步驟854:根據輸出之第一旗標而調整來自該預先必要之感應之資料。Step 854: Adjust the data from the pre-needed sensing according to the first flag of the output.

步驟856:鎖存資料以指示是否需要對於跟隨之對當前頁之該感應而進行校正。Step 856: Latch the data to indicate if corrections to the current page are required to be corrected.

將圖43所示之步驟830以進一步之清晰度表示於圖45之示意流程圖中：步驟832：以或不以來自預先必要之感應之校正而執行對當前頁之該感應。Figure 45 is a schematic flow diagram illustrating step 830 of Figure 43 with further clarity: Step 832: Perform the sensing of the current page with or without correction from the pre-requisite sensing.

步驟834:輸出作為來自該當前感應之資料之部分而獲得之第二旗標。Step 834: Output a second flag obtained as part of the data from the current sensing.

步驟836:回應於第二旗標,藉由將資料保持為不改變之狀態或將資料調整一預定值或者在另一感應條件集合下重複對當前頁之該感應而獲得新資料來修訂來自該當前感應之資料。Step 836: Respond to the second flag, by revising the new data by keeping the data in a state of no change or adjusting the data to a predetermined value or repeating the sensing of the current page under another set of sensing conditions. Current sensing data.

步驟838:鎖存根據來自預先必要之感應之資料是否指示存在校正而經校正或未經校正的修訂資料。Step 838: Latch revision data that is corrected or uncorrected based on whether the data from the pre-required sensing indicates that there is a correction.

已使用2位元LM代碼而描述以上之演算法。演算法為對於3個或3個以上之位元同樣地可應用之LM代碼。The above algorithm has been described using a 2-bit LM code. The algorithm is an LM code that is equally applicable to three or more bits.

雖然已關於特定實施例而描述本發明之各種態樣,但應瞭解,本發明有權保護所附申請專利範圍之全部範疇。While the invention has been described with respect to the specific embodiments thereof, it should be understood that

"1"...記憶體狀態/邏輯狀態"1". . . Memory state/logic state

"5"...記憶體狀態"5". . . Memory state

6...主機6. . . Host

8...記憶體控制器8. . . Memory controller

10...記憶體單元10. . . Memory unit

12...分離通道12. . . Separation channel

14...源極14. . . Source

16...汲極16. . . Bungee

20...浮動閘極20. . . Floating gate

20'...浮動閘極20'. . . Floating gate

30...控制閘極30. . . Control gate

30'...控制閘極30'. . . Control gate

34...位元線34. . . Bit line

36...位元線//操縱線36. . . Bit line // steering line

40...選擇閘極40. . . Select gate

42...字線42. . . Word line

50...NAND單元50. . . NAND unit

54...源極端子54. . . Source terminal

56...汲極端子56. . .汲 extreme

100...記憶體陣列100. . . Memory array

130...列解碼器130. . . Column decoder

160...行解碼器160. . . Row decoder

170...讀取/寫入電路170. . . Read/write circuit

180...讀取/寫入堆疊180. . . Read/write stack

190...讀取/寫入模組190. . . Read/write module

212...感應放大器之堆疊212. . . Stacking of sense amplifiers

212-1...感應放大器212-1. . . Sense amplifier

212-k...感應放大器212-k. . . Sense amplifier

214...感應放大器資料鎖存器DLS214. . . Amplifier Amplifier Data Latch DLS

214-1...SA鎖存器214-1. . . SA latch

231...I/O匯流排//資料I/O線231. . . I/O bus / / data I / O line

300...記憶體陣列300. . . Memory array

301...記憶體晶片301. . . Memory chip

310...控制電路/記憶體控制器/晶片上主機介面310. . . Control circuit / memory controller / on-chip host interface

310'...晶片上控制電路310'. . . On-wafer control circuit

311...線311. . . line

312...狀態機312. . . state machine

312'...有限狀態機312'. . . Finite State Machine

314...晶片上位址解碼器314. . . On-chip address decoder

316...功率控制模組316. . . Power control module

322...緩衝器322. . . buffer

324...程式暫存器324. . . Program register

330...列解碼器/佇列330. . . Column decoder/array

330A...列解碼器330A. . . Column decoder

330B...列解碼器330B. . . Column decoder

332...記憶體作業佇列管理器/記憶體作業管理器332. . . Memory Job Queue Manager / Memory Job Manager

350...區塊多工器350. . . Block multiplexer

350A...區塊多工器350A. . . Block multiplexer

350B...區塊多工器350B. . . Block multiplexer

360...行解碼器360. . . Row decoder

360A...行解碼器360A. . . Row decoder

360B...行解碼器360B. . . Row decoder

370...讀取/寫入電路370. . . Read/write circuit

370A...讀取/寫入電路370A. . . Read/write circuit

370B...讀取/寫入電路370B. . . Read/write circuit

400...讀取/寫入堆疊400. . . Read/write stack

400-1......400-r...讀取/寫入堆疊400-1...400-r. . . Read/write stack

410...堆疊匯流排控制器410. . . Stack bus controller

411...控制線411. . . Control line

421...堆疊匯流排421. . . Stack bus

422...SABus/SBUS/線422. . . SABus/SBUS/line

423...DBus/線423. . . DBus/line

430...資料鎖存器之堆疊430. . . Data latch stack

430-1...資料鎖存器/資料鎖存器之集合430-1. . . Data latch/data latch set

430-k...資料鎖存器430-k. . . Data latch

431...互連堆疊匯流排431. . . Interconnect stack bus

434-0...資料鎖存器DL0434-0. . . Data latch DL0

434-1......434-n...資料鎖存器434-1...434-n. . . Data latch

435...線435. . . line

440...I/O模組440. . . I/O module

500...通用處理器500. . . General purpose processor

501...轉移閘極501. . . Transfer gate

502...轉移閘極502. . . Transfer gate

505...處理器匯流排PBUS505. . . Processor bus PBUS

507...輸出507. . . Output

509...旗標匯流排509. . . Flag bus

510...輸入邏輯510. . . Input logic

520...處理器鎖存器PLatch//設定/重設鎖存器PLatch//輸入邏輯520. . . Processor latch PLatch / / set / reset latch PLatch / / input logic

522...轉移閘極522. . . Transfer gate

523...輸出523. . . Output

524...p型電晶體524. . . P-type transistor

525...p型電晶體525. . . P-type transistor

526...n型電晶體526. . . N-type transistor

527...n型電晶體527. . . N-type transistor

530...輸出邏輯530. . . Output logic

531...p型電晶體531. . . P-type transistor

532...p型電晶體532. . . P-type transistor

533...p型電晶體533. . . P-type transistor

534...p型電晶體534. . . P-type transistor

535...n型電晶體535. . . N-type transistor

536...n型電晶體536. . . N-type transistor

537...n型電晶體537. . . N-type transistor

538...n型電晶體538. . . N-type transistor

550...n型電晶體550. . . N-type transistor

"A"...狀態"A". . . status

B...斷續豎直線B. . . Intermittent vertical line

"B"...狀態"B". . . status

BSI...輸出BSI. . . Output

"C"...狀態"C". . . status

DA ...臨限電壓D A . . . Threshold voltage

DAL ...界線D AL . . . Boundary

DB ...臨限電壓D B . . . Threshold voltage

DC ...臨限電壓D C . . . Threshold voltage

DL0......DLn...鎖存器DL0...DLn. . . Latches

DTN...補充信號DTN. . . Supplementary signal

DTP...補充信號DTP. . . Supplementary signal

DVA ...界線/臨限位準DV A . . . Boundary/premise level

DVB ...界線DV B. . . Boundary

DVBL ...界線DV BL . . . Boundary

DVC ...界線DV C . . . Boundary

ID ...源極-汲極電流I D . . . Source-drain current

IREF ...參考電流I REF . . . Reference current

M1、M2......Mn...記憶體電晶體M1, M2...Mn. . . Memory transistor

MEM OP0...記憶體作業MEM OP0. . . Memory work

MEM OP1...記憶體作業MEM OP1. . . Memory work

MTCH...補充輸出信號/資料MTCH. . . Supplementary output signal / data

MTCH ...補充輸出信號/資料MTCH * . . . Supplementary output signal / data

n...序列n. . . sequence

NDIR...控制信號NDIR. . . control signal

NINV...控制信號NINV. . . control signal

ONE...信號ONE. . . signal

ONEB<0>...信號ONEB<0>. . . signal

ONEB<1>...信號ONEB<1>. . . signal

PBUS...信號PBUS. . . signal

PDIR...控制信號PDIR. . . control signal

PINV...控制信號PINV. . . control signal

Q1-Q4...電荷Q1-Q4. . . Electric charge

S1 (n)...第一感應S 1 (n). . . First induction

S1...源極選擇電晶體S1. . . Source selective transistor

S2 (n)...第二感應S 2 (n). . . Second induction

S2...汲極選擇電晶體S2. . . Bungee selection transistor

SAN...補充信號SAN. . . Supplementary signal

SAP...補充信號SAP. . . Supplementary signal

t0 ...時間t 0 . . . time

t1 ...時間t 1 . . . time

t2 ...時間t 2 . . . time

t3 ...時間t 3 . . . time

t4 ...時間t 4 . . . time

t5 ...時間t 5 . . . time

t6 ...時間t 6 . . . time

t7 ...時間t 7 . . . time

t8 ...時間t 8 . . . time

t9 ...時間t 9 . . . time

t10 ...時間t 10 . . . time

t11 ...時間t 11 . . . time

t12 ...時間t 12 . . . time

-t1...時間-t1. . . time

-t2...時間-t2. . . time

-t3...時間-t3. . . time

-t4...時間-t4. . . time

-t5...時間-t5. . . time

T1...電晶體T1. . . Transistor

T2...電晶體T2. . . Transistor

"U"...記憶體狀態"U". . . Memory state

VCG ...控制閘極電壓V CG . . . Control gate voltage

VPGM_L...值VPGM_L. . . value

VPGM_U...開始值VPGM_U. . . Starting value

VT ...臨限電壓V T. . . Threshold voltage

圖1A至圖1E示意地說明非揮發性記憶體單元之不同實例。Figures 1A through 1E schematically illustrate different examples of non-volatile memory cells.

圖2說明記憶體單元之NOR陣列之一實例。Figure 2 illustrates an example of a NOR array of memory cells.

圖3說明諸如圖1D所示的記憶體單元之NAND陣列之一實例。Figure 3 illustrates an example of a NAND array such as the memory cell shown in Figure 1D.

圖4說明對於浮動閘極於任一時間可儲存之四個不同電荷Q1-Q4的源極-汲極電流與控制閘極電壓之間的關係。Figure 4 illustrates the relationship between the source-drain current and the control gate voltage for four different charges Q1-Q4 that the floating gate can store at any one time.

圖5示意地說明藉由讀取/寫入電路經由列及行解碼器可存取之記憶體陣列之典型配置。Figure 5 schematically illustrates a typical configuration of a memory array accessible by a read/write circuit via a column and row decoder.

圖6A為個別讀取/寫入模組之示意方塊圖。Figure 6A is a schematic block diagram of an individual read/write module.

圖6B展示由讀取/寫入模組之堆疊按照慣例實施之圖5之讀取/寫入堆疊。Figure 6B shows the read/write stack of Figure 5, which is conventionally implemented by stacking of read/write modules.

圖7A示意地說明具有一組經分割之讀取/寫入堆疊之緊密記憶體裝置,其中實施本發明之改良處理器。Figure 7A schematically illustrates a compact memory device having a set of segmented read/write stacks in which the improved processor of the present invention is implemented.

圖7B說明圖7A所示之緊密記憶體裝置之較佳配置。Figure 7B illustrates a preferred configuration of the compact memory device shown in Figure 7A.

圖8示意地說明圖7A所示之讀取/寫入堆疊中之基本組件的一般配置。Figure 8 schematically illustrates the general configuration of the basic components in the read/write stack shown in Figure 7A.

圖9說明圖7A及圖7B所示之讀取/寫入電路中之讀取/寫入堆疊的一較佳配置。Figure 9 illustrates a preferred configuration of a read/write stack in the read/write circuit of Figures 7A and 7B.

圖10說明圖9所示之通用處理器之改良實施例。Figure 10 illustrates a modified embodiment of the general purpose processor illustrated in Figure 9.

圖11A說明圖10所示之通用處理器之輸入邏輯的較佳實施例。Figure 11A illustrates a preferred embodiment of the input logic of the general purpose processor shown in Figure 10.

圖11B說明圖11A之輸入邏輯之真值表。Figure 11B illustrates a truth table for the input logic of Figure 11A.

圖12A說明圖10所示之通用處理器之輸出邏輯的較佳實施例。Figure 12A illustrates a preferred embodiment of the output logic of the general purpose processor illustrated in Figure 10.

圖12B說明圖12A之輸出邏輯之真值表。Figure 12B illustrates a truth table for the output logic of Figure 12A.

圖13為圖10之簡化版本,其展示在本發明之二位元實施例中與當前論述相關之一些特定元件。Figure 13 is a simplified version of Figure 10 showing some of the specific elements associated with the present discussion in the two-bit embodiment of the present invention.

圖14關於與圖13相同之元件指示對於上部頁程式化之鎖存器分配,在其中讀入下部頁資料。Figure 14 is the same as Figure 13 for the component assignment for the upper page stylized latch in which the lower page data is read.

圖15說明以單頁模式進行之快取程式化之態樣。Figure 15 illustrates the aspect of the cached stylization in single page mode.

圖16展示可用於下部頁至全序列轉換中之程式化波形。Figure 16 shows the stylized waveforms that can be used in the lower page to full sequence conversion.

圖17說明在具有全序列轉換之快取程式作業中之相對時序。Figure 17 illustrates the relative timing in a cacher job with full sequence conversion.

圖18描述鎖存器在快取頁複製作業中之部署。Figure 18 depicts the deployment of a latch in a cache page copy job.

圖19A及圖19B說明快取頁複製作業中之相對時序。19A and 19B illustrate the relative timing in the cache page copy job.

圖20A說明在每一記憶體單元使用LM代碼儲存兩個位元之資料時4態記憶體陣列之臨限電壓分布。Figure 20A illustrates the threshold voltage distribution of a 4-state memory array when each memory cell uses the LM code to store two bits of data.

圖20B說明使用LM代碼在現有2循環程式化機制中進行之下部頁程式化。Figure 20B illustrates the use of LM code to perform the lower page stylization in the existing 2-loop stylization mechanism.

圖20C說明使用LM代碼在現有2循環程式化機制中進行之上部頁程式化。Figure 20C illustrates the use of LM code for upper page stylization in an existing 2-loop stylization mechanism.

圖20D說明瞭解以LM代碼編碼之4態記憶體之下部位元所需的讀取作業。Figure 20D illustrates the read operation required to understand the location elements below the 4-state memory encoded in the LM code.

圖20E說明瞭解以LM代碼編碼之4態記憶體之上部位元所需的讀取作業。Figure 20E illustrates the read operation required to understand the location elements above the 4-state memory encoded in the LM code.

圖21為說明將下一頁程式化資料載入未使用之資料鎖存器中之背景作業的下部頁程式化之示意時序圖。Figure 21 is a schematic timing diagram illustrating the programming of the lower page of the background job loading the next page of stylized data into the unused data latches.

圖22為展示在使用QWP之4態上部頁或全序列程式化之各種階段期間需追蹤的狀態之數目的表。Figure 22 is a table showing the number of states to be tracked during various stages of the 4-page upper page or full sequence stylization using QWP.

圖23為說明將下一頁程式化資料載入未使用之資料鎖存器中之背景作業的上部頁或全序列程式化之示意時序圖。Figure 23 is a schematic timing diagram illustrating the upper page or full sequence stylization of the background job loading the next page of stylized data into the unused data latches.

圖24為說明根據本發明之一般實施例的與當前多階段記憶體作業同時發生之鎖存器作業之流程圖。24 is a flow diagram illustrating a latch operation occurring concurrently with a current multi-stage memory job in accordance with a general embodiment of the present invention.

圖25為下部頁程式化之示意時序圖,其說明使用可用鎖存器而進行之讀取中斷作業。Figure 25 is a schematic timing diagram of the lower page stylization illustrating the read interrupt operation using the available latches.

圖26為上部頁程式化之示意時序圖,其說明使用可用鎖存器而進行之讀取中斷作業。Figure 26 is a schematic timing diagram of the upper page stylization illustrating the read interrupt operation using the available latches.

圖27說明與典型記憶體作業相關聯之資訊之封裝。Figure 27 illustrates the encapsulation of information associated with a typical memory job.

圖28說明支援簡單快取作業之習知記憶體系統。Figure 28 illustrates a conventional memory system that supports simple cache operations.

圖29為說明多個記憶體作業之排入佇列及可能合併之流程圖。Fig. 29 is a flow chart showing the arrangement of a plurality of memory jobs and possible merging.

圖30說明併有記憶體作業佇列及記憶體作業佇列管理器之較佳晶片上控制電路之示意方塊圖。Figure 30 illustrates a schematic block diagram of a preferred on-wafer control circuit with a memory operating array and a memory operating array manager.

圖31為說明抹除作業期間在背景中之快取作業之示意流程圖。Figure 31 is a schematic flow chart illustrating the cache operation in the background during the erase operation.

圖32為對記憶體陣列進行之抹除作業之示意時序圖,其說明抹除作業之第一抹除階段期間之程式化資料載入作業。Figure 32 is a schematic timing diagram of an erase operation performed on a memory array illustrating a stylized data loading operation during the first erase phase of the erase operation.

圖33為對記憶體陣列進行之抹除作業之示意時序圖,其說明抹除作業之軟式程式化/驗證階段期間之程式化資料載入作業。Figure 33 is a schematic timing diagram of an erase operation performed on a memory array illustrating a stylized data loading operation during the soft stylization/verification phase of the erase operation.

圖34為對記憶體陣列進行之抹除作業之示意時序圖，其說明插入之讀取作業及使用可用鎖存器而進行之所得資料輸出作業。Figure 34 is a schematic timing diagram of an erase operation performed on a memory array, illustrating the inserted read operation and the resulting data output operation using the available latches.

圖35為說明圖31之步驟780中在抹除作業期間在背景中用於讀取擦洗應用之特定快取作業之示意流程圖。35 is a schematic flow diagram illustrating a particular cache job for reading a scrubbing application in the background during an erase operation in step 780 of FIG.

圖36說明抹除期間之優先背景讀取。Figure 36 illustrates a prioritized background read during erase.

圖37示意地說明典型讀取快取機制。Figure 37 schematically illustrates a typical read cache mechanism.

圖38A為關於快取讀取以LM代碼編碼之邏輯頁的示意時序圖。Figure 38A is a schematic timing diagram for a cache read logical page encoded with LM code.

圖38B為關於以LM代碼進行之快取讀取在尚未對上部位元邏輯頁進行程式化時讀取下部位元邏輯頁之特殊情形中的示意時序圖。Figure 38B is a schematic timing diagram in a special case of a cache read by LM code in which a lower-part logical page is read when the upper-part meta-logic page has not been programmed.

圖39說明對於2位元記憶體以所有位元感應而進行之快取讀取之示意時序圖。Figure 39 illustrates a schematic timing diagram for a cache read of a 2-bit memory with all bit sensing.

圖40說明一記憶體之實例,其具有2位元記憶體單元且使其頁以最佳序列程式化從而最小化鄰近字線上之記憶體單元之間的Yupin效應。Figure 40 illustrates an example of a memory having a 2-bit memory cell and having its pages programmed in an optimal sequence to minimize the Yupin effect between memory cells on adjacent word lines.

圖41說明根據圖37所示之習知機制對於LM代碼連同LA校正之讀取快取的實施。Figure 41 illustrates an implementation of a read cache for LM code along with LA correction in accordance with the conventional mechanism illustrated in Figure 37.

圖42說明以LM代碼連同LA校正進行之改良讀取快取機制。Figure 42 illustrates an improved read cache mechanism with LM code along with LA correction.

圖43為說明改良讀取快取之示意流程圖。Figure 43 is a schematic flow diagram illustrating an improved read cache.

圖44為以進一步之清晰度說明圖43之步驟850的示意流程圖。Figure 44 is a schematic flow diagram illustrating step 850 of Figure 43 with further clarity.

圖45為以進一步之清晰度說明圖43之步驟830的示意流程圖。Figure 45 is a schematic flow diagram illustrating step 830 of Figure 43 with further clarity.


Claims (35)

一種具有記憶體單元之可定址頁之非揮發性記憶體裝置,其包含:向一經定址之頁之每一記憶體單元提供的一資料鎖存器集合,該資料鎖存器集合具有鎖存一預定數目之位元之能力;一用於控制該經定址之頁上之一當前記憶體作業之控制電路,該當前記憶體作業在作業期間具有一或多個階段,每一階段與作業狀態之一預定集合相關聯;一階段相依之編碼,其對於每一階段提供以使得對於該等階段中之至少一些而言,其作業狀態之集合以實質上一最小量之位元編碼從而使空閒資料鎖存器之一子集自由;且該控制電路與該當前記憶體作業同時地對空閒資料鎖存器之該子集進行控制一或多個作業,該或該等作業的資料係與對於記憶體陣列進行之一或多個未決記憶體作業相關。 A non-volatile memory device having addressable pages of a memory unit, comprising: a data latch set provided to each memory cell of an addressed page, the data latch set having a latch a predetermined number of bits of capability; a control circuit for controlling a current memory job on the addressed page, the current memory job having one or more phases during the job, each phase and the job state a predetermined set of associations; a phase-dependent code that is provided for each stage such that for at least some of the stages, the set of job states is encoded with substantially a minimum number of bits to enable idle data A subset of the latches is free; and the control circuit controls one or more jobs of the subset of idle data latches simultaneously with the current memory job, the data of the or the job and the memory The volume array is associated with one or more pending memory jobs. 如請求項1之非揮發性記憶體裝置,其中對空閒資料鎖存器之該子集進行之該或該等作業包括快取與一或多個後續記憶體作業相關之資料。 The non-volatile memory device of claim 1, wherein the or the operations performed on the subset of idle data latches include caching data associated with one or more subsequent memory operations. 如請求項2之非揮發性記憶體裝置,其中該與一或多個後續記憶體作業相關之資料係自該記憶體裝置之外部供應。 The non-volatile memory device of claim 2, wherein the data associated with one or more subsequent memory operations is supplied from outside the memory device. 如請求項1之非揮發性記憶體裝置,其中對空閒資料鎖 存器之該子集進行之該或該等作業包括將該與一或多個後續記憶體作業相關之資料轉移至空閒資料鎖存器之該子集中。 A non-volatile memory device as claimed in claim 1, wherein the idle data lock is The or the operations performed by the subset of registers include transferring the data associated with one or more subsequent memory jobs to the subset of idle data latches. 如請求項2之非揮發性記憶體裝置,其中該或該等後續記憶體作業為程式作業且對空閒資料鎖存器之該子集進行之該或該等作業包括快取該程式化資料。 The non-volatile memory device of claim 2, wherein the or subsequent memory operations are program operations and the or the operations of the subset of idle data latches include caching the stylized data. 如請求項2之非揮發性記憶體裝置,其中該與一或多個後續記憶體作業相關之資料與不同於記憶體單元之該經定址之頁的另一頁相關聯。 A non-volatile memory device as claimed in claim 2, wherein the material associated with one or more subsequent memory operations is associated with another page that is different from the addressed page of the memory unit. 如請求項6之非揮發性記憶體裝置,其中該當前記憶體作業為一程式作業。 The non-volatile memory device of claim 6, wherein the current memory job is a program job. 如請求項2之非揮發性記憶體裝置,其中該與一或多個後續記憶體作業相關之資料與記憶體單元之該經定址之頁相關聯。 A non-volatile memory device as claimed in claim 2, wherein the material associated with one or more subsequent memory operations is associated with the addressed page of the memory unit. 如請求項8之非揮發性記憶體裝置,其中該當前記憶體作業為一程式作業。 The non-volatile memory device of claim 8, wherein the current memory job is a program job. 如請求項2之非揮發性記憶體裝置,其中該與一或多個後續記憶體作業相關之資料由該記憶體裝置供應。 The non-volatile memory device of claim 2, wherein the data associated with one or more subsequent memory operations is supplied by the memory device. 
11. The non-volatile memory device of claim 2, wherein the one or more operations on the subset of free data latches include transferring the data related to one or more subsequent memory operations out of the subset of free data latches.
12. The non-volatile memory device of claim 2, wherein the one or more subsequent memory operations are read operations and the one or more operations on the subset of free data latches include caching the read data.
13. The non-volatile memory device of claim 2, wherein the data related to the one or more subsequent memory operations is read data from another page different from the addressed page of memory cells.
14. The non-volatile memory device of claim 13, wherein the current memory operation is a program operation.
15. A non-volatile memory device having addressable pages of memory cells, comprising: a set of data latches provided for each memory cell of an addressed page, the set of data latches having the capacity to latch a predetermined number of bits; means for controlling a current memory operation on the addressed page, the current memory operation having one or more phases during its course, each phase being associated with a predetermined set of operation states; a phase-dependent coding provided for each phase such that, for at least some of the phases, the set of operation states is coded with substantially a minimum number of bits so as to free up a subset of free data latches; and means for controlling, concurrently with the current memory operation, one or more operations on the subset of free data latches, the data of the one or more operations being related to one or more pending memory operations on the memory array.
16. The non-volatile memory device of any one of claims 1 to 15, wherein the memory cells each store one bit of data.
17. The non-volatile memory device of any one of claims 1 to 15, wherein the memory cells each store two bits of data.
18. The non-volatile memory device of any one of claims 1 to 15, wherein the memory cells each store more than two bits of data.
19. A method of operating a non-volatile memory having addressable pages of memory cells, comprising: providing for each memory cell of an addressed page a set of data latches having the capacity to latch a predetermined number of bits; performing a current memory operation on the addressed page, the memory operation having one or more phases, each phase being associated with a predetermined set of operation states; providing for each phase a phase-dependent coding such that, for at least some of the phases, the predetermined set of operation states is coded with substantially a minimum number of bits so as to free up a subset of free data latches; and performing, concurrently with the current memory operation, one or more operations on the subset of free data latches, the data of which are related to one or more subsequent memory operations on the memory array.
20. The method of claim 19, wherein the one or more operations on the subset of free data latches include caching the data related to one or more subsequent memory operations.
21. The method of claim 20, wherein the data related to the one or more subsequent memory operations is supplied from outside the memory device.
22. The method of claim 19, wherein the one or more operations on the subset of free data latches include transferring the data related to one or more subsequent memory operations into the subset of free data latches.
23. The method of claim 20, wherein the one or more subsequent memory operations are program operations and the one or more operations on the subset of free data latches include caching the program data.
24. The method of claim 20, wherein the data related to the one or more subsequent memory operations is associated with another page different from the addressed page of memory cells.
25. The method of claim 24, wherein the current memory operation is a program operation.
26. The method of claim 20, wherein the data related to the one or more subsequent memory operations is associated with the addressed page of memory cells.
27. The method of claim 26, wherein the current memory operation is a program operation.
28. The method of claim 20, wherein the data related to the one or more subsequent memory operations is supplied by the memory device.
29. The method of claim 20, wherein the one or more operations on the subset of free data latches include transferring the data related to one or more subsequent memory operations out of the subset of free data latches.
30. The method of claim 20, wherein the one or more subsequent memory operations are read operations and the one or more operations on the subset of free data latches include caching the read data.
31. The method of claim 20, wherein the data related to the one or more subsequent memory operations is read data from another page different from the addressed page of memory cells.
32. The method of claim 31, wherein the current memory operation is a program operation.
33. The method of any one of claims 19 to 32, wherein the memory cells each store one bit of data.
34. The method of any one of claims 19 to 32, wherein the memory cells each store two bits of data.
35. The method of any one of claims 19 to 32, wherein the memory cells each store more than two bits of data.
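Claims 1 and 19 above rest on two cooperating ideas: a phase-dependent coding that lets the operation states of the current memory operation be held in fewer latches as the operation advances, and the use of the latches thereby freed to cache data for a pending or subsequent operation while the current one is still in progress. The C sketch below is a minimal software model of that behaviour, assuming a hypothetical two-bit-per-cell program operation and three latches per bit line; the phase names and the helpers latches_needed, release_latches and cache_next_page are illustrative names made up for this sketch, not circuit elements or identifiers taken from the patent.

/*
 * Minimal software model of background data-latch caching. Illustrative
 * sketch only, not the on-chip latch circuitry: latch count, phase names
 * and helper functions are assumptions chosen for clarity.
 */
#include <stdio.h>
#include <stdint.h>

#define LATCHES_PER_CELL 3   /* assumed: two data bits plus one spare latch */
#define PAGE_CELLS       8   /* toy page of 8 cells                         */

/* Phases of a hypothetical two-bit-per-cell program operation. */
typedef enum {
    PHASE_BOTH_BITS,   /* both target bits still needed for program/verify  */
    PHASE_UPPER_ONLY,  /* lower bit locked out: states re-coded in one bit  */
    PHASE_DONE         /* program finished: no latches needed by the op     */
} phase_t;

typedef enum { LATCH_OP, LATCH_CACHED, LATCH_FREE } latch_use_t;

typedef struct {
    uint8_t     bit[LATCHES_PER_CELL];
    latch_use_t use[LATCHES_PER_CELL];
} latch_set_t;

static latch_set_t page[PAGE_CELLS];

/* Phase-dependent coding: how many latches the current operation still
 * needs per cell; the remainder may be freed for background use. */
static int latches_needed(phase_t phase)
{
    switch (phase) {
    case PHASE_BOTH_BITS:  return 2;
    case PHASE_UPPER_ONLY: return 1;
    default:               return 0;
    }
}

/* Release the latches the current phase no longer needs. */
static void release_latches(phase_t phase)
{
    int needed = latches_needed(phase);
    for (int c = 0; c < PAGE_CELLS; c++)
        for (int l = needed; l < LATCHES_PER_CELL; l++)
            if (page[c].use[l] == LATCH_OP)
                page[c].use[l] = LATCH_FREE;
}

/* Background operation: stream next-page program data into whatever
 * latches are free, concurrently (in the real device) with the ongoing
 * program/verify pulses on the addressed page. */
static int cache_next_page(const uint8_t *next, int done, int total)
{
    for (int c = 0; c < PAGE_CELLS && done < total; c++)
        for (int l = 0; l < LATCHES_PER_CELL && done < total; l++)
            if (page[c].use[l] == LATCH_FREE) {
                page[c].bit[l] = next[done++];
                page[c].use[l] = LATCH_CACHED;
            }
    return done;   /* number of next-page bits now held in latches */
}

int main(void)
{
    const uint8_t current[PAGE_CELLS]    = {1, 0, 3, 2, 0, 3, 1, 2};
    const uint8_t next_lower[PAGE_CELLS] = {0, 1, 1, 0, 1, 0, 0, 1};
    int cached = 0;

    /* Load the addressed page: two target bits per cell occupy two latches. */
    for (int c = 0; c < PAGE_CELLS; c++) {
        page[c].bit[0] = current[c] & 1;        page[c].use[0] = LATCH_OP;
        page[c].bit[1] = (current[c] >> 1) & 1; page[c].use[1] = LATCH_OP;
        page[c].use[2] = LATCH_FREE;            /* spare latch starts free  */
    }

    /* Walk the program operation through its phases; after each phase the
     * freed latches are used to cache data for the next pending program. */
    for (int p = PHASE_BOTH_BITS; p <= PHASE_DONE; p++) {
        release_latches((phase_t)p);
        cached = cache_next_page(next_lower, cached, PAGE_CELLS);
        printf("phase %d: op needs %d latch(es)/cell, next-page bits cached: %d/%d\n",
               p, latches_needed((phase_t)p), cached, PAGE_CELLS);
    }
    return 0;
}

In this toy model the spare latch already lets part of the next page be toggled in during the first phase, and each later phase releases another latch per cell; in an actual device the cache transfer would proceed concurrently with the program/verify pulsing rather than between phases as it does in this sequential simulation.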
TW96115926A 2006-05-05 2007-05-04 Non-volatile memory with background data latch caching during program operations and methods therefor TWI427637B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/382,006 US7505320B2 (en) 2005-04-01 2006-05-05 Non-volatile memory with background data latch caching during program operations
US11/381,995 US7502260B2 (en) 2005-04-01 2006-05-05 Method for non-volatile memory with background data latch caching during program operations

Publications (2)

Publication Number Publication Date
TW200809862A TW200809862A (en) 2008-02-16
TWI427637B true TWI427637B (en) 2014-02-21

Family

ID=44800303

Family Applications (1)

Application Number Title Priority Date Filing Date
TW96115926A TWI427637B (en) 2006-05-05 2007-05-04 Non-volatile memory with background data latch caching during program operations and methods therefor

Country Status (1)

Country Link
TW (1) TWI427637B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1209568A1 (en) * 1999-02-22 2002-05-29 Hitachi, Ltd. Memory card, method for allotting logical address, and method for writing data
US7009878B2 (en) * 2000-03-08 2006-03-07 Kabushiki Kaisha Toshiba Data reprogramming/retrieval circuit for temporarily storing programmed/retrieved data for caching and multilevel logical functions in an EEPROM
US6856568B1 (en) * 2000-04-25 2005-02-15 Multi Level Memory Technology Refresh operations that change address mappings in a non-volatile memory
US7038946B2 (en) * 2002-02-06 2006-05-02 Kabushiki Kaisha Toshiba Non-volatile semiconductor memory device
EP1473737A1 (en) * 2002-02-08 2004-11-03 Matsushita Electric Industrial Co., Ltd. Non-volatile storage device and control method thereof
US20040060031A1 (en) * 2002-09-24 2004-03-25 Sandisk Corporation Highly compact non-volatile memory and method thereof
US20040109357A1 (en) * 2002-09-24 2004-06-10 Raul-Adrian Cernea Non-volatile memory and method with improved sensing
US7023736B2 (en) * 2002-09-24 2006-04-04 Sandisk Corporation Non-volatile memory and method with improved sensing
US20050257120A1 (en) * 2004-05-13 2005-11-17 Gorobets Sergey A Pipelined data relocation and improved chip architectures
US20060031593A1 (en) * 2004-08-09 2006-02-09 Sinclair Alan W Ring bus structure and its use in flash memory systems

Also Published As

Publication number Publication date
TW200809862A (en) 2008-02-16

Similar Documents

Publication Publication Date Title
KR101400999B1 (en) Non-volatile memory with background data latch caching during read operations and methods therefor
US8036041B2 (en) Method for non-volatile memory with background data latch caching during read operations
US7463521B2 (en) Method for non-volatile memory with managed execution of cached data
EP2070090A1 (en) Pseudo random and command driven bit compensation for the cycling effects in flash memory and methods therefor
EP2016590B1 (en) Non-volatile memory with background data latch caching during read operations and methods therefor
WO2007131127A2 (en) Merging queued memory operation in a non-volatile memory
TWI427637B (en) Non-volatile memory with background data latch caching during program operations and methods therefor
WO2007130976A2 (en) Non-volatile memory with background data latch caching during program operations and methods therefor
TW200809863A (en) Non-volatile memory with background data latch caching during erase operations and methods therefor

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees