Asset ID: 1-79-1355743.1
Update Date: 2012-07-24
Keywords:
Solution Type: Predictive Self-Healing Sure Solution
1355743.1: Sun Server X2-8 (formerly Sun Fire X4800 + M2) (G5/G5+) Product Page
Related Items:
- Sun Server X2-8
- Sun Fire X4800 Server
Related Categories:
- PLA-Support>Sun Systems>x64>Server>SN-x64: SERVER 64bit
- .Old GCS Categories>Sun Microsystems>Servers>x64 Servers
Oracle Confidential (PARTNER). Do not distribute to customers.
Reason: Migration of Panacea Content
Applies to:
Sun Fire X4800 Server - Version Not Applicable and later
Sun Server X2-8 - Version Not Applicable and later
Information in this document applies to any platform.
Purpose
This document contains the product page for the Sun Fire X4800, Sun Fire X4800 M2, and Sun Server X2-8.
Details
Sun Server X2-8 (formerly Sun Fire X4800 + M2) (G5/G5+)
Overview:
The Sun Fire X4800 SMP rack server is a compact, modular 5RU (rack unit) system that redefines the enterprise x86 market
with superior performance, outstanding I/O expandability, and unmatched RAS features.
Powered by up to eight Intel Xeon Processor 7500 Series CPUs and up to 1TB of memory, this server excels in
data warehousing applications such as Oracle TimesTen. In addition, the Sun Fire X4800 server features two
Network Express Modules, providing customers with eight GbE and eight 10GbE ports for connectivity.

New to the x86 market, the Sun Fire X4800 server includes hot-serviceable RAS capabilities that offer
a reliable platform for mission-critical applications. These include hot-pluggable components such as
PCIe ExpressModules, and hot-swappable components such as front-accessible disk drives with RAID-enabled redundancy,
redundant fans, and redundant power supplies. All contribute to increased uptime and ease of serviceability in the event of a hardware failure.

The compact, scalable design of the Sun Fire X4800 server saves customers time and money. It provides flexibility
for datacenter growth while minimizing the costs associated with datacenter refresh. In addition to its large memory
footprint and leading reliability, the Sun Fire X4800 server has demonstrated unmatched price/performance, making it an
ideal replacement for inefficient legacy HP Itanium and IBM Power systems.
The X4800 M2 server was RR (Revenue Release) on 07/12/2011.
PTS Engagement:
Support Alias: [email protected]
Forum: Support_G5
Instant Messages: x64-all (beehive chat room)
Facts:
Processors
- 2 or 4 CMODs (Compute Modules), providing a 4- or 8-CPU configuration
- Each CMOD contains 2 Intel Xeon 7500 Series (Nehalem-EX) CPUs
- E7540 @ 2.0GHz, 6-core, 18MB cache, 105W
- X7550 @ 2.0GHz, 8-core, 18MB cache, 130W
- X7560 @ 2.26GHz, 8-core, 24MB cache, 130W
- Integrated DDR3 memory controller with 4x SMI (Scalable Memory Interconnect) channels per CPU @ 6.4 GT/s
- Intel QuickPath Interconnect (QPI) architecture for inter-CPU and CPU-I/O communication
- 2x QPI links up to 6.4GT/s (12.8GB/s per direction) per processor, 1 to Boxboro and 1 to the other CPU
- 6- or 8-core CPUs with Turbo Boost and Hyper-Threading (dual-threaded Nehalem cores); totals worked out after this list
- Up to 24MB shared last-level (Level 3) cache
- 45nm process technology
- 64KB Level 1 and 256KB Level 2 cache per core
- 95W to 130W TDP
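As a worked example of the totals these options imply, a fully populated system (4 CMODs, 8 CPUs) with 8-core X7550 or X7560 processors provides 8 x 8 = 64 cores, or 128 hardware threads with Hyper-Threading enabled; with the 6-core E7540 the totals are 48 cores and 96 threads.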
Processors M2 version
- 2 or 4 CMODs (Compute Modules), providing a 4- or 8-CPU configuration
- Each CMOD contains 2 Intel Xeon E7-8800 Series (Westmere-EX) CPUs
- E7-8870 @ 2.40GHz, 10-core, 30MB cache, 130W
- E7-8830 @ 2.13GHz, 8-core, 24MB cache, 105W
- 8- or 10-core CPUs with Turbo Boost and Hyper-Threading (totals after this list)
- Up to 30MB shared last-level cache
- 32nm process technology
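For the M2, the same math with E7-8870 CPUs gives 8 x 10 = 80 cores and 160 threads; with E7-8830 CPUs, 64 cores and 128 threads.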
Memory
- 8 to 128 registered DDR3 DIMMs with Extended ECC (Chipkill) per server, 4 to 32 DDR3 DIMMs per CMOD; each CMOD must have the same configuration
- Each CPU has an IMC (Integrated Memory Controller) divided into two independent controllers (MBox0 and MBox1)
- The default is to interleave memory between MBox0 and MBox1 where possible
- Each memory controller has two SMI interconnects, each to an MB (Mill Brook) buffer, which has 2 DDR3 DIMM channels supporting 2 DIMMs per channel
- Each MB has 1 SMI interconnect to the CPU and 2 DDR3 channels
- 2, 4, 8, or 16GB capacities (16GB with QR DIMMs only, post RR); see the capacity example after this list
- 800, 978, or 1066MHz via a memory buffer MB (Mill Brook)
- 2 and 4GB DIMMs use 1333MHz parts
- 8GB DIMMs use 1066MHz parts
- 2x FMOD slots per CMOD with battery ESM (Energy Storage Module), supported post RR
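Capacity example: a fully populated server holds 4 CMODs x 32 DIMMs = 128 DIMMs, so with 8GB DIMMs the maximum is 128 x 8GB = 1TB, the figure quoted in the overview above.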
M2 Memory
- 4, 8, and 16GB DIMMs use 1333MHz parts
- Support for low-voltage DIMMs at 1.35V (Mill Brook 2)
Disks & Storage
- 8x 2.5" 300 or 600GB SAS2 HDD hot-swappable with RAID enabled (600GB post RR)
- 8x 2.5" 500GB SATA2 HDD hot-swappable with RAID enabled (post RR)
- 8x 50 or 100GB SSD drives (post RR)
- 8 SAS interfaces to each of the SFF SAS/SATA Disk Drive Bays with RAID 0,1,5,10 support through REM card only
- REM card has 128MB of write cache, battery backup, either Erie or Niwot LSI based, only on CMOD0
- External Storage Options Here
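Where the RAID-capable Niwot REM is installed, volume and battery state can be checked from the host OS. This is a hedged sketch only: it assumes the Niwot REM is managed with the LSI MegaCLI utility and the IT-mode Erie REM with sas2ircu, and that those utilities are installed on the host; adapter numbering and utility paths vary by OS.
  MegaCli -LDInfo -Lall -aALL               # logical (RAID) drive state on all adapters
  MegaCli -AdpBbuCmd -GetBbuStatus -aALL    # battery backup unit status
  sas2ircu 0 display                        # inventory for an IT-mode (Erie) controller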
M2 Disks & Storage
- 8x 2.5" 300 or 600GB SAS2 HDD hot-swappable with RAID enabled
- 8x 32GB SSD drives
- 8 SAS interfaces to each of the SFF SAS/SATA Disk Drive Bays with RAID 0,1,5,10 support through REM card only
- REM card has 128MB of write cache, battery backup, either Erie or Niwot LSI based, only on CMOD0
- M2 External Storage Options Here
Graphics
- Built-in ASPEED AST2100 VGA, 1280x1024 @ 60Hz, 8MB video memory
Networking
- 8x (2 per CMOD) 10/100/1000Base-T Ethernet ports (RJ45 connectors) from the Intel Kawela PCIe Gen2 chipset, through the NEMs
- 8x (2 per CMOD) 10GbE ports (SFP+ connectors), 4 per NEM, through the NEMs
PCI and Internal I/O
- Per CMOD: Intel Boxboro-EX IOH (I/O Hub, QPI <-> PCIe Gen2 bridge) architecture (see the OS-level check after this list)
- Per CMOD: 2x 6.4GT/s QPI links, 1 to each CPU
- Per CMOD: 2x 8-lane PCIe Gen2
- Per CMOD: 1x 8-lane PCIe Gen2 connection to the FEM controller, divided into 2x 4-lane 10GbE ports on the NEMs
- On CMOD0: 4-lane to the SAS/SATA REM controller, divided into 2x 2-lane mini SAS2 ports on the NEMs
- Per CMOD: 4-lane to the Kawela GbE onboard chip, divided into 2x GbE ports on the NEMs
- Per CMOD: Intel ICH10R (I/O Controller Hub southbridge)
- Enterprise Southbridge Interface (ESI) to Boxboro
- Path to boot PROM on LPC or SPI
- USB 2.0, LPC, internal Flash Stick, etc.
- 1x internal Flash Stick slot
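On a Linux installation, the internal I/O devices listed above can be confirmed from the OS with lspci. A minimal sketch; the exact device-name strings depend on the pciutils ID database, so the grep patterns below are illustrative assumptions:
  lspci | grep -i ethernet    # Kawela onboard GbE and any 10GbE FEM ports
  lspci | grep -i sas         # REM SAS/SATA controller (present on CMOD0 only)
  lspci | grep -i usb         # ICH10R USB controllers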
External I/O
- 2x USB 2.0 rear chassis ports through dongle
- 2x Network Express Modules (NEMs) standard
- 4x 10GbE ports (SFP+ connectors) per NEM, FEMs required
- 2x x4 mini SAS2 ports through SAS2 expander
- 4x GbE ports (RJ45 connectors) per NEM
- 1x SAS2 expander (LSI 62132A1) connecting to the SAS2 external ports and the 8x internal SAS disks
- 2x PCI Express ExpressModules (EMs) slots
- SAS2 RAID Niwot REM
- 10 Gigabit Ethernet FEM
- 8Gb FC & GbE EM (Metis Qlogic)
- 8Gb FC & GbE EM (Metis Emulex)
- GbE Northstar 4 port Cu EM (Pentwater)
- 10GbE EM (Intel Niantic)
- QDR EM (QMirage)
- SAS2 8 port IT EM (Erie) (post RR)
- SAS2 8 port RAID EM (Niwot) (post RR)
- Supported PCIe ExpressModules
- Supported M2 PCIe ExpressModules
Service Processor, 1 per CMOD
- ASPEED AST2100 CPU and 128MB RAM
- ILOM 3.x with standard BMC/IPMI/LAN/SNMP interfaces
- Out-of-band Management 10/100BaseT Ethernet (RJ45 Connector) through CMM netmgmt port
- Serial Management (RJ45 Connector) through CMM
- Remote KVMS over IP through CMM
CMM (Chassis Management Module)
- ASPEED AST2100 CPU and 128MB RAM
- ILOM 3.x with standard BMC/IPMI/LAN/SNMP interfaces (see the access example after this list)
- Out-of-band Management 10/100BaseT Ethernet (RJ45 Connector)
- Serial Management (RJ45 Connector)
- Remote KVMS over IP
- 8MB video memory
- 1280x1024 max resolution
- UCP (Universal Connector Port) through dongle
- 2 USB 2.0 Ports with rear-accessible connectors
- 1 RS-232 serial port, switched between System and Management, with a front-accessible RJ45 connector
- 1 VGA port with rear-accessible DB-15 connector
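Because both the per-CMOD SPs and the CMM run ILOM 3.x, they can be reached with standard IPMI tooling or over SSH through the CMM net-mgmt port. A minimal access sketch, assuming the CMM is reachable on the management network and the root account is valid; the address is a placeholder:
  ipmitool -I lanplus -H <cmm-ip> -U root chassis status   # power and chassis state over IPMI
  ipmitool -I lanplus -H <cmm-ip> -U root sdr elist         # sensor readings
  ssh root@<cmm-ip>                                         # ILOM command-line interface
  -> show faulty                                            # ILOM 3.x shortcut for open faults (assumed available on this platform)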
Power Supplies & Cooling
- 4x 2000W hot-swap redundant PSUs (2+2)
- 4x hot-swap redundant fans (N+1)
Software
- Click here for the latest official list
- Solaris 10 10/09 (Update 8) + patches minimum, 64-bit only
- Oracle VM Server 2.2.1 E1 (E1 = Errata 1)
- Oracle Enterprise Linux 5.5 64-bit
- SUSE SLES 11 64-bit
- SUSE SLES 11 SP1 64-bit
- RHEL 5.5 64-bit
- RHEL 6.0 64-bit
- Windows 2008 Standard, Enterprise or Data Center Edition R2 64-bit
- Windows 2008 Data Center Edition SP2 64-bit
- VMware ESX/ESXi 4.0 Update 1 on 4- and 8-socket configurations (since 10/01/10)
- VMware ESX/ESXi 4.0 Update 2 (4-socket)
Software M2 version
- Click here for the latest official list
- Oracle Solaris 11 Express (64-bit)
- Oracle Solaris 10 9/10 (64-bit) plus patch 144489-11 or later and patch 144568-02 or later (included in the preinstall image); see the verification example after this list
- Oracle Enterprise Linux 5.6 (64-bit)
- Oracle Enterprise Linux 5.7 (64-bit)
- Oracle Enterprise Linux 6 (64-bit)
- Oracle Enterprise Linux 6.1 (64-bit)
- Oracle Unbreakable Linux 5.6 (64-bit)
- Oracle VM Server 2.2.1
- Oracle VM Server 2.2.2
- Oracle VM Server 3.0
- RHEL 5.6 (64-bit)
- RHEL 5.7 (64-bit)
- RHEL 6 (64-bit)
- RHEL 6.1 (64-bit)
- SUSE Linux Enterprise Server (SLES) 11 SP1 (64-bit)
- Windows Server 2008 R2/SP1 (64-bit)
- VMware ESX/ESXi 4.1 update 1
- VMware ESX/ESXi 5.0
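Because M2 support on Solaris 10 9/10 depends on the minimum patch levels listed above, it is worth confirming they are installed before engaging further. A minimal check on a running Solaris 10 system; the grep targets are the patch base IDs taken from this document:
  showrev -p | egrep '144489|144568'    # list matching installed patches
  patchadd -p | egrep '144489|144568'   # equivalent alternative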
Racks
- Supports up to 8 G5s in Sun Rack II (Redwood) 1042/1242
- Supported in any rack meeting the following criteria:
- Four-post rack (mounting at both front and rear); not supported in two-post racks
- Rack must have 5RU space available
- Rack should have a horizontal opening and unit vertical pitch conforming to ANSI/EIA 310-D-1992 or IEC 60297 standards
- Distance between front and rear mounting planes between approximately 26 and 34.5 inches (660.4mm and 876.3mm)
- Minimum clearance depth (to front cabinet door) in front of front rack mounting plane: 1 inch (25.4mm).
- Minimum clearance depth (to rear cabinet door) behind front rack mounting plane: 27.5 inches (700mm).
- Minimum clearance width (between structural supports and cable troughs) between front and rear mounting planes: 18 inches (456 mm).
Links
Attachments
This solution has no attachment