CN109669901A - A kind of server - Google Patents
A kind of server
- Publication number
- CN109669901A (application CN201811466950.8A)
- Authority
- CN
- China
- Prior art keywords
- GPU
- PCIe
- GPU board
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/18—Packaging or power distribution
- G06F1/183—Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4063—Device-to-bus coupling
- G06F13/4068—Electrical coupling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2213/00—Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F2213/0026—PCI express
Abstract
The invention discloses a server, comprising: a mainboard on which are mounted a first CPU, at least one first-level PCIe switch chip connected to the first CPU, and at least one second-level PCIe switch chip each connected to one of the first-level PCIe switch chips; multiple GPU boards, each carrying two GPUs; and a backplane into which the mainboard and the GPU boards respectively plug, connecting each GPU board to one of the second-level PCIe switch chips so that the first CPU and the GPUs interconnect over the PCIe bus. By using PCIe switch chips, the server of the invention can scale PCIe resources to user demand, reducing hardware development cost and customer purchase cost; at the same time, the revised hardware architecture improves space utilization and reduces the floor-space cost of the customer's machine room.
Description
Technical field
The present invention relates to the field of server design, and more particularly to a high-density multi-GPU server.
Background art
Artificial intelligence has been an ultimate goal of computing since the birth of the computer. With the rapid advance of computing power in recent years, applications such as face recognition and autonomous driving have developed quickly. These applications are based on deep-learning algorithms, and the hardware platforms that support those algorithms today largely rely on the massively parallel compute capability of the GPU (Graphics Processing Unit).
Each CPU on the Intel Purley platform supports at most 48 PCIe lanes. Traditional servers can make do with this, but it clearly cannot satisfy high-density servers and large-scale compute applications. In general, a high-end graphics card with its huge 3D throughput requires an x16 link, so such server platforms place specific demands on PCIe resources. To meet the need for extreme compute capability, current server designs therefore use multiple CPUs and integrate multiple standard GPU cards inside the chassis. The specific design method is as follows: the PCIe resources of the CPUs are interconnected directly with the GPUs, and the GPUs are controlled through the algorithm model (instruction set) to perform accelerated processing. In other words, raising compute capability by integrating more GPUs requires interconnecting more CPUs with them. Adding CPUs increases both the hardware development cost and the customer's purchase cost. In addition, in the traditional design a GPU server comprises two parts: a head, which holds the mainboard, and a GPU BOX tail, which holds the GPUs; the two parts are mutually independent and interconnected by cables. Taking a 4U-high chassis as an example, at most 8 GPUs can be integrated in it, so the prior-art design method reduces space utilization and increases the floor-space cost of the customer's machine room. On these two counts, the marketing of this type of server suffers.
In view of the above defects in the prior art, the field urgently needs a solution that integrates multiple GPUs at high density, reducing costs for all parties and improving space utilization.
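The lane arithmetic behind this problem can be made concrete. The sketch below is an illustrative Python calculation (not part of the patent) that compares the direct CPU-to-GPU attachment of the prior art with the switched fan-out the invention proposes, using the figures cited above: 48 lanes per Purley CPU and an x16 link per GPU.

```python
import math

LANES_PER_CPU = 48   # Intel Purley: maximum PCIe lanes per CPU, as cited above
LANES_PER_GPU = 16   # a high-end GPU wants a full x16 link
TARGET_GPUS = 16     # density goal of the invention

# Prior art: every GPU link terminates directly on a CPU root port,
# so the lane demand determines the CPU count.
lanes_needed = TARGET_GPUS * LANES_PER_GPU             # 256 lanes
cpus_direct = math.ceil(lanes_needed / LANES_PER_CPU)  # 6 CPUs

# Switched design: a single x16 root port feeds a two-level PCIe
# switch tree, so one CPU can reach all 16 GPUs.
cpus_switched = 1
root_lanes_used = 16

print(cpus_direct, cpus_switched)  # → 6 1
```

The calculation shows why scaling by adding CPUs is expensive: direct attachment of 16 x16 GPUs would need six Purley CPUs' worth of lanes, whereas the switched topology consumes only one x16 root port.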
Summary of the invention
In view of this, an object of the embodiments of the present invention is to propose a server that solves the prior-art problem in which extending compute resources with additional CPUs raises costs for all parties and lowers integration density.
Based on the above object, one aspect of the embodiments of the present invention provides a server, comprising:
a mainboard on which are mounted:
a first CPU;
at least one first-level PCIe switch chip connected to the first CPU; and
at least one second-level PCIe switch chip, each second-level PCIe switch chip being connected to one of the first-level PCIe switch chips;
multiple GPU boards, each GPU board carrying two GPUs; and
a backplane into which the mainboard and the multiple GPU boards respectively plug, the backplane connecting each GPU board to one of the second-level PCIe switch chips so that the first CPU and the GPUs interconnect over the PCIe bus.
In some embodiments, the backplane occupies a 6U-high space, of which the top 1U connects to the mainboard, the middle 4U connects to the multiple GPU boards, and the bottom 1U connects to a power board.
In some embodiments, the top 1U space is provided with high-speed connectors and power connectors; the high-speed connectors are connected to the second-level PCIe switch chips to interconnect the mainboard with the GPU boards and allocate PCIe resources to the GPU boards, and the power connectors supply the mainboard with a 12V input voltage.
In some embodiments, the middle 4U space is provided with high-speed connectors and power connectors; the high-speed connectors connect to the GPU boards and provide them with PCIe resources, and the power connectors supply the GPU boards with a 12V input voltage.
In some embodiments, the bottom 1U space is provided with power/signal connectors connected to the power board; the mainboard and the GPU boards obtain the 12V input voltage from the power board through the power/signal connectors, and the mainboard controls the power board and reads its status information through them.
In some embodiments, the GPU boards comprise a first SXM2 GPU board and a second SXM2 GPU board interconnected by an NVLink bridge connector, with two SXM2-series GPUs mounted on each of the first and second SXM2 GPU boards.
In some embodiments, the first SXM2 GPU board provides control signals to the second SXM2 GPU board.
In some embodiments, each GPU board comprises a GPU base card and a riser card; the GPU base card connects to the backplane, and the riser card mounts on the GPU base card.
In some embodiments, the riser card carries either half-height half-length Tesla P4 GPUs or full-height full-length Tesla P100/V100 GPUs.
In some embodiments, a second CPU is also mounted on the mainboard, interconnected with the south-bridge chip and a solid-state drive.
The present invention has the following beneficial effects: the server provided by the embodiments of the present invention uses PCIe switch chips to scale PCIe resources to user demand, reducing hardware development cost and customer purchase cost; the revised hardware architecture improves space utilization and reduces the floor-space cost of the customer's machine room; and the server supports several types of GPU board, meeting the design requirements of different application scenarios.
Brief description of the drawings
In order to explain the embodiments of the present invention or the prior art more clearly, the drawings needed for their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can derive further embodiments from them without inventive effort.
Fig. 1 is a system interconnection block diagram of a server according to an embodiment of the invention;
Fig. 2 is a block diagram of the backplane according to an embodiment of the invention;
Fig. 3 is a block diagram of the SXM2 GPU A card according to an embodiment of the invention;
Fig. 4 is a block diagram of the SXM2 GPU B card according to an embodiment of the invention;
Fig. 5 is a schematic diagram of a dual-GPU-card NVLink interconnection configuration according to an embodiment of the invention;
Fig. 6 is a schematic diagram of a quad-GPU-card NVLink interconnection configuration according to an embodiment of the invention;
Fig. 7 is a schematic diagram of the PCIe GPU base card according to an embodiment of the invention;
Fig. 8 is a schematic diagram of the GPU Riser A card according to an embodiment of the invention; and
Fig. 9 is a schematic diagram of the GPU Riser B card according to an embodiment of the invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the invention are further described below with reference to specific embodiments and the accompanying drawings.
It should be noted that in the embodiments of the present invention, the terms "first", "second" and the like distinguish two or more entities or parameters that share a name but are not identical; they are used only for convenience of description and should not be construed as limiting the embodiments. Subsequent embodiments do not repeat this note.
Based on the above object, the embodiments of the present invention propose an embodiment of the server. Fig. 1 shows its system interconnection block diagram.
As shown in Fig. 1, the server mainly comprises a mainboard 10, multiple GPU boards 20 and a backplane 30. Mounted on the mainboard 10 are: a first CPU 101; at least one first-level PCIe switch chip 102 connected to the first CPU 101; and at least one second-level PCIe switch chip 103, each connected to one of the first-level PCIe switch chips 102. Each of the GPU boards 20 carries two GPUs. The backplane 30 receives the mainboard 10 and the GPU boards 20 and connects each GPU board 20 to one of the second-level PCIe switch chips 103, so that the first CPU 101 and the GPUs interconnect over the PCIe bus.
Specifically, a PCIe root port of the first CPU 101 interconnects through a PCIe x16 link with the upstream port of the first-level PCIe switch chip 102. The first-level PCIe switch chip 102 contains 4 downstream ports, each interconnecting with the upstream port of a second-level PCIe switch chip 103; that is, the embodiment contains 4 second-level PCIe switch chips 103 in total. Each second-level PCIe switch chip 103 has 4 downstream ports, so the 4 second-level switch chips together provide 16 PCIe x16 links, which can be allocated to 16 GPUs for big-data computation and analysis. In the embodiment of the present invention, the first-level PCIe switch chip 102 and the second-level PCIe switch chips 103 may be PEX8780 chips; each PEX8780 provides 80 Gen3 PCIe lanes, configurable as up to 5 x16 ports or in any other combination, so users can build high-performance, low-latency applications on demand. A second CPU 104 is also mounted on the mainboard 10 and interconnects with devices such as the south-bridge chip (PCH) and a solid-state drive (M.2 SSD), providing the basic functions of the server system. The first CPU 101 can additionally interconnect with standard PCIe slots. Each CPU can configure its PCIe resources through the BIOS. Each GPU board 20 can carry two NVIDIA GPUs; therefore, in this embodiment, 8 GPU boards 20 are needed to carry 16 GPUs. The backplane 30 is located in the middle of the chassis, perpendicular to the chassis bottom, and consists mainly of components such as high-speed connectors and power connectors. It connects to the mainboard 10 and to the GPU boards 20 and carries the PCIe bus interconnection between the first CPU 101 and the GPUs.
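The 1 → 4 → 16 fan-out described above can be sketched as a small model. The following illustrative Python check (not from the patent) uses the port counts of the embodiment: each switch exposes one upstream x16 port plus four downstream x16 ports, which fits exactly within a PEX8780's 80 Gen3 lanes.

```python
X16 = 16
LANES_PER_PEX8780 = 80  # 80 Gen3 lanes per chip, per the embodiment

def switch_lanes(upstream_ports: int, downstream_ports: int) -> int:
    """Total lanes a switch consumes with every port configured as x16."""
    return (upstream_ports + downstream_ports) * X16

# First level: 1 upstream port to the CPU root port, 4 downstream ports.
level1_down = 4
assert switch_lanes(1, level1_down) <= LANES_PER_PEX8780  # 80 lanes: exact fit

# Second level: each of the 4 chips again exposes 4 downstream x16 links.
level2_down = 4
gpu_links = level1_down * level2_down  # 16 x16 links for 16 GPUs
boards = gpu_links // 2                # two GPUs per GPU board -> 8 boards

print(gpu_links, boards)  # → 16 8
```

The model confirms the numbers in the text: a single CPU root port fans out to 16 x16 endpoints, carried on 8 dual-GPU boards, with each PEX8780's lane budget fully used by its 1 + 4 port configuration.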
Using PCIe switch chips, the server of the embodiment of the present invention can scale PCIe resources to user demand, reducing hardware development cost and customer purchase cost; at the same time, the revised hardware architecture improves space utilization and reduces the floor-space cost of the customer's machine room.
In a preferred embodiment, the backplane 30 occupies a 6U-high space (6U is about 267 mm): the top 1U connects to the mainboard 10, the middle 4U connects to the GPU boards 20, and the bottom 1U connects to the power board.
Fig. 2 is a block diagram of the backplane according to an embodiment of the invention. The top 1U space carries multiple high-speed connectors (High Speed Connector, HS Conn) and multiple power connectors (Power Conn); the high-speed connectors connect to the second-level PCIe switch chips 103, interconnecting the mainboard 10 with the GPU boards 20 and allocating PCIe resources to the GPU boards 20, while the power connectors supply the mainboard 10 with a 12V input voltage. The middle 4U space carries multiple high-speed connectors and power connectors; the high-speed connectors connect to the GPU boards 20, supplying them with PCIe resources and control signals, and the power connectors supply the GPU boards 20 with a 12V input voltage. The bottom 1U space carries multiple power/signal connectors (Power/Signal Conn) connected to the power board; through them the mainboard 10 and the GPU boards 20 draw the 12V input voltage from the power board, and the mainboard 10 controls the power board and reads its status information to monitor its working state.
In a preferred embodiment, the GPU boards comprise a first SXM2 GPU board (A card) and a second SXM2 GPU board (B card) interconnected by an NVLink bridge connector; each of the two boards carries two SXM2-series GPUs (P100 or V100).
Fig. 3 is a block diagram of the SXM2 GPU A card according to an embodiment of the invention; Fig. 4 is a block diagram of the SXM2 GPU B card. As shown in Figs. 3 and 4, the A card interconnects with the backplane through PCIe high-speed connectors and power connectors (carrying the PCIe bus, control signals and the 12V input voltage) and with the B card through an NVLink bridge connector located on the front of the board. The B card likewise interconnects with the backplane through PCIe high-speed connectors and power connectors, and with the A card through an NVLink bridge connector located on the back of the board.
In the above embodiments, the SXM2 GPU A and B cards support three configurations. For a dual-GPU NVLink bus interconnection, either only A cards or only B cards are selected; Fig. 5 illustratively shows A cards mounted in chassis slots 1–8. For a quad-GPU NVLink bus interconnection, A and B cards are used in matched pairs, as shown in Fig. 6: during installation an A card and a B card are first joined through the bridge connector, then inserted together into the fixed chassis positions (slots 1/2, 3/4, 5/6, 7/8). To meet the timing-control requirements, the A card supplies the B card with control signals such as power-on enable, reset and clock.
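As a sanity check of the slot rules above, the following illustrative Python helper (not part of the patent; the function name and return shape are my own) enumerates chassis placements for the two NVLink configurations: eight single cards in slots 1–8 for the dual-GPU mode, or four A/B pairs in the fixed positions 1/2, 3/4, 5/6, 7/8 for the quad-GPU mode.

```python
def plan_slots(mode: str, card: str = "A"):
    """Return chassis placements for the NVLink configurations described above."""
    if mode == "dual":
        # Dual-GPU NVLink: only A cards or only B cards, one per slot 1-8.
        return [(slot, card) for slot in range(1, 9)]
    if mode == "quad":
        # Quad-GPU NVLink: an A and a B card are joined by the bridge
        # connector and inserted as a pair into the fixed positions;
        # the A card drives B's power-on enable, reset and clock.
        return [((2 * i - 1, "A"), (2 * i, "B")) for i in range(1, 5)]
    raise ValueError(f"unknown mode: {mode}")

print(len(plan_slots("dual")))  # → 8
print(plan_slots("quad")[0])    # → ((1, 'A'), (2, 'B'))
```

Both modes fill the same eight physical slots; only the pairing and the card mix differ, which is what lets one backplane serve both configurations.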
In a preferred embodiment, the GPU board is a PCIe GPU board comprising a GPU base card and a riser card (Riser A card or Riser B card), and supports two types of GPU. As shown in Fig. 7, the base card connects to the backplane through high-speed connectors and power connectors, and the mainboard supplies it with signals including two PCIe x16 links and control/clock signals. The Riser A card mounts on the base card through a gold-finger edge connector and provides two standard PCIe x16 slots for installing half-height half-length Tesla P4 GPUs, as shown in Fig. 8. The Riser B card mounts on the base card in the same way and provides one standard PCIe x16 slot for installing a full-height full-length Tesla P100/V100 GPU, as shown in Fig. 9.
With its two types of GPU board, the present invention supports NVIDIA GPUs in three form factors: Tesla SXM2 GPUs, full-height full-length PCIe GPUs, and half-height half-length PCIe GPUs.
In practical applications, the server of the invention uses two CPUs: one CPU interconnects with the PCH and the M.2 SSD to satisfy the basic configuration requirements of the system; the other CPU interconnects with the PCIe switch chips to extend PCIe resources, and also interconnects directly with PCIe slots so that standard PCIe cards can be installed. Each PCIe switch chip is configured in Base Mode; the upstream port of the first-level PCIe switch chip interconnects with a CPU root port, and the downstream ports of the second-level PCIe switch chips interconnect with the GPUs. To integrate 16 GPUs at high density, the invention adopts a 6U mechanical design in hardware; the mainboard, the backplane, the two SXM2 GPU A/B boards, and the PCIe GPU base board with Riser A/B boards can be combined as needed to suit different GPU models. This greatly widens the range of application of the server and meets the varied demands of customers.
It should be understood that the present invention is not limited to the embodiments shown in the drawings; those skilled in the art can make appropriate modifications to the illustrated embodiments under the teaching of the invention. Moreover, the technical solution of the invention can be applied to the design principles of other high-density, highly integrated servers.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. Also, any connection is properly termed a computer-readable medium. For example, if software is transmitted from a website, server or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL or such wireless technologies are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The above are exemplary embodiments of the present disclosure; the order in which the embodiments are disclosed is for description only and does not imply any relative merit. It should be noted that the discussion of any embodiment above is exemplary only and is not intended to suggest that the scope of the disclosure (including the claims) is limited to these examples; many variations and modifications are possible without departing from the scope defined by the claims. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Claims (10)
1. A server, characterized by comprising:
a mainboard on which are mounted:
a first CPU;
at least one first-level PCIe switch chip connected to the first CPU; and
at least one second-level PCIe switch chip, each second-level PCIe switch chip being connected to one of the first-level PCIe switch chips;
multiple GPU boards, each GPU board carrying two GPUs; and
a backplane into which the mainboard and the multiple GPU boards respectively plug, the backplane connecting each GPU board to one of the second-level PCIe switch chips so that the first CPU and the GPUs interconnect over the PCIe bus.
2. The server according to claim 1, characterized in that the backplane occupies a 6U-high space, of which the top 1U is for connecting to the mainboard, the middle 4U is for connecting to the multiple GPU boards, and the bottom 1U is for connecting to a power board.
3. The server according to claim 2, characterized in that the top 1U space is provided with high-speed connectors and power connectors; the high-speed connectors are connected to the second-level PCIe switch chips to interconnect the mainboard with the multiple GPU boards and allocate PCIe resources to the multiple GPU boards, and the power connectors supply the mainboard with a 12V input voltage.
4. The server according to claim 2, characterized in that the middle 4U space is provided with high-speed connectors and power connectors; the high-speed connectors are connected to the multiple GPU boards and provide them with PCIe resources, and the power connectors supply the multiple GPU boards with a 12V input voltage.
5. The server according to claim 2, characterized in that the bottom 1U space is provided with power/signal connectors connected to the power board; the mainboard and the multiple GPU boards obtain a 12V input voltage from the power board through the power/signal connectors, and the mainboard controls the power board and obtains its status information through the power/signal connectors.
6. The server according to claim 2, characterized in that the multiple GPU boards comprise a first SXM2 GPU board and a second SXM2 GPU board interconnected by an NVLink bridge connector, two SXM2-series GPUs being mounted on each of the first SXM2 GPU board and the second SXM2 GPU board.
7. The server according to claim 6, characterized in that the first SXM2 GPU board provides control signals to the second SXM2 GPU board.
8. The server according to claim 2, characterized in that each GPU board comprises a GPU base card and a riser card; the GPU base card is connected to the backplane, and the riser card is mounted on the GPU base card.
9. The server according to claim 8, characterized in that half-height half-length Tesla P4 GPUs or full-height full-length Tesla P100/V100 GPUs are mounted on the riser card.
10. The server according to claim 1, characterized in that a second CPU is also mounted on the mainboard, the second CPU interconnecting with a south-bridge chip and a solid-state drive.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811466950.8A CN109669901A (en) | 2018-12-03 | 2018-12-03 | A kind of server |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811466950.8A CN109669901A (en) | 2018-12-03 | 2018-12-03 | A kind of server |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109669901A true CN109669901A (en) | 2019-04-23 |
Family
ID=66143594
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811466950.8A Withdrawn CN109669901A (en) | 2018-12-03 | 2018-12-03 | A kind of server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109669901A (en) |
- 2018-12-03: application CN201811466950.8A filed; published as CN109669901A (status: withdrawn)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111737181A (en) * | 2020-06-19 | 2020-10-02 | 苏州浪潮智能科技有限公司 | Heterogeneous processing equipment, system, port configuration method, device and storage medium |
WO2022021298A1 (en) * | 2020-07-31 | 2022-02-03 | Nvidia Corporation | Multi-format graphics processing unit docking board |
CN112667556A (en) * | 2020-12-23 | 2021-04-16 | 曙光信息产业(北京)有限公司 | GPU server and image processing system |
CN113741642A (en) * | 2021-07-27 | 2021-12-03 | 苏州浪潮智能科技有限公司 | High-density GPU server |
CN113741642B (en) * | 2021-07-27 | 2023-08-11 | 苏州浪潮智能科技有限公司 | High-density GPU server |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109669901A (en) | A kind of server | |
CN100424668C (en) | Automatic configurating system for PCI-E bus | |
CN108388532A (en) | The AI operations that configurable hardware calculates power accelerate board and its processing method, server | |
CN209879413U (en) | Server structure supporting multiple GPU cards | |
CN102073356B (en) | IO (input/output) expansion module of blade server, and blade and server having the same | |
CN113835487B (en) | System and method for realizing memory pool expansion of high-density server | |
CN108874711B (en) | Hard disk backboard system with optimized heat dissipation | |
CN210466253U (en) | Server with high-density GPU expansion capability | |
CN101894055A (en) | Method for realizing blade mainboard interface with redundancy function | |
CN210776403U (en) | Server architecture compatible with GPUDirect storage mode | |
CN205485799U (en) | Can multiplexing SAS, hard disk backplate of SATA signal | |
CN208752617U (en) | A kind of L-type 2U storage server for supporting 8 disk positions | |
CN216927600U (en) | Network data computing system and server with built-in network data computing system | |
CN104484305B (en) | Server debugging analysis interface device | |
CN206363303U (en) | A kind of CPU module based on VPX structures | |
CN216352292U (en) | Server mainboard and server | |
US8612548B2 (en) | Computer server system and computer server for a computer server system | |
US20070226456A1 (en) | System and method for employing multiple processors in a computer system | |
CN111984584A (en) | Variable node based on domestic Feiteng high-performance processor | |
CN214151687U (en) | Many serial ports extension, many USB's special mainboard of finance based on godson platform | |
CN112260969B (en) | Blade type edge computing equipment based on CPCI framework | |
CN211479015U (en) | Computer mainboard and industrial computer based on explain majestic treaters | |
CN215932518U (en) | Cloud computing ultra-fusion all-in-one machine equipment | |
CN216697259U (en) | Modularized multi-unit server architecture capable of being flexibly configured and expanded | |
CN216352080U (en) | High-density industrial control mainboard based on domestic processor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20190423 |