New in our Portfolio: HGST ActiveScale Systems

As an official HGST Enterprise partner, Chip ICT has adopted the ActiveScale™ portfolio.

ActiveScale P100

Modular object storage system scaling up from 720TB to over 19PB, so you can keep up with data growth, manage storage at petabyte scale, and deliver on business objectives.

ActiveScale Cloud Management

Monitoring and storage analytics for ActiveScale systems via a cloud interface lets you get ahead of events before they become problems.

ActiveScale X100

Integrated object storage system scaling from 840TB to over 52PB, delivering outstanding data consistency, availability, and durability for building world-class cloud infrastructure.

NVMe-oF™ (NVM Express™ over Fabrics)

NVM Express (NVMe™) – Non-Volatile Memory Express – is a communications interface and protocol developed specifically for SSDs. Designed to take advantage of the unique properties of pipeline-rich, random-access, memory-based storage, NVMe reduces latency and provides faster CPU-to-storage-device performance.

It is a new and innovative method of accessing storage media that has been capturing the imagination of data center professionals worldwide. NVMe is an alternative to the Small Computer System Interface (SCSI) standard for connecting and transferring data between a host and a peripheral target storage device or system. SCSI became a standard in 1986, when hard disk drives (HDDs) and tape were the primary storage media. NVMe is designed for use with faster media, such as solid-state drives (SSDs) and post-flash memory-based technologies. NVMe provides a streamlined register interface and command set that reduce the I/O stack’s CPU overhead by accessing the PCIe bus directly. Benefits of NVMe-based storage drives include lower latency, deep parallelism, and higher performance.
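The deep parallelism mentioned above comes from NVMe's per-core queue-pair design: each CPU core can own its own submission/completion queue pair, so cores never contend on a shared lock. The following toy model sketches that idea (illustrative only, not a real driver; NVMe itself allows up to 64K queues with 64K commands each):

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

# Toy model of NVMe queue pairs: each worker owns a private
# submission queue (SQ) and completion queue (CQ), so workers
# never contend on shared state -- the key idea behind NVMe's
# deep parallelism.

def worker(commands):
    sq, cq = Queue(), Queue()
    for cmd in commands:          # host enqueues commands
        sq.put(cmd)
    while not sq.empty():         # "device" drains the SQ
        op, lba = sq.get()
        cq.put((op, lba, "OK"))   # post a completion entry
    return [cq.get() for _ in range(len(commands))]

# Four "cores", each with an independent queue pair.
workloads = [[("read", 8 * i + j) for j in range(8)] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(worker, workloads))

print(sum(len(r) for r in results))  # 32 completions total
```

Because no queue is shared, adding cores adds queue pairs rather than lock contention, which is why NVMe scales so much better than the single-queue SCSI model.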

What is NVMe over Fabrics (NVMe-oF™)?

NVMe over Fabrics (NVMe-oF™) enables the use of alternate transports to PCIe to extend the distance over which an NVMe host device and an NVMe storage drive or subsystem can connect. NVMe-oF™ defines a common architecture that supports the NVMe block storage protocol over a range of storage networking fabrics. This includes enabling a front-side interface into storage systems, scaling out to large numbers of NVMe devices, and extending the distance within a datacenter over which NVMe devices and NVMe subsystems can be accessed.

NVMe-oF™ is designed to extend NVMe onto fabrics such as Ethernet, Fibre Channel, and InfiniBand.
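At connect time, an NVMe-oF host identifies a remote subsystem by its NVMe Qualified Name (NQN), fabric transport, and transport address. A minimal sketch of such a discovery record follows (field names and values are illustrative, not the on-wire format):

```python
from dataclasses import dataclass

# Illustrative model of an NVMe-oF discovery log entry: the host
# uses these fields to reach a remote NVMe subsystem over a fabric.
@dataclass
class DiscoveryEntry:
    transport: str   # e.g. "rdma" (Ethernet/InfiniBand) or "fc"
    traddr: str      # transport address, e.g. an IP or FC WWPN
    trsvcid: str     # transport service id, e.g. a port number
    subnqn: str      # NVMe Qualified Name of the target subsystem

# Hypothetical target, using the conventional NVMe-oF port 4420.
entry = DiscoveryEntry(
    transport="rdma",
    traddr="192.168.1.50",
    trsvcid="4420",
    subnqn="nqn.2014-08.org.example:nvme:subsys1",
)
print(f"{entry.transport}://{entry.traddr}:{entry.trsvcid}")
```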

New: Intel Xeon W Processor Family

After the successful launch of the Intel Scalable Processor Series earlier this year, Intel has now unveiled its latest processor family: Intel Xeon W.

This new CPU series is designed for single-socket professional workstations.

The new Xeon W family is not based on the Xeon Scalable Processor design.

The Xeon W range’s flagship model is the Xeon W-2195, which includes 18 cores and 36 threads (18c/36t) running at a 2.3GHz base frequency and 4.3GHz turbo frequency, 24.75MB of unified cache, and a 140W thermal design profile (TDP).

The entry point, meanwhile, is the Xeon W-2123, which is a four-core eight-thread (4c/8t) part running at 3.6GHz base and 3.9GHz turbo with 8.25MB of cache in a 120W TDP.

In between, the range includes 6c/12t, 8c/16t, and 10c/20t models.

Each model supports up to 512GB of DDR4 memory across four memory channels with ECC support, and includes 48 PCI Express lanes.
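As a back-of-the-envelope check on those four memory channels, the theoretical peak bandwidth can be computed as transfer rate × channel width × channel count (assuming DDR4-2666; actual supported speed varies per model):

```python
# Theoretical peak DDR4 bandwidth: transfers/s x 8 bytes per
# transfer (each channel is 64 bits wide) x number of channels.
# Assumes DDR4-2666; actual supported speed varies per model.
channels = 4
mt_per_s = 2666e6          # 2666 mega-transfers per second
bytes_per_transfer = 8     # 64-bit wide channel
peak_gb_s = channels * mt_per_s * bytes_per_transfer / 1e9
print(round(peak_gb_s, 1))  # ~85.3 GB/s across four channels
```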


GPU Computing, the basics:

GPU-accelerated computing is the use of a Graphics Processing Unit (GPU) together with a CPU to accelerate deep learning, analytics, and engineering applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient data centers in government labs, universities, enterprises, and small-and-medium businesses around the world. They play a huge role in accelerating applications in platforms ranging from artificial intelligence to cars, drones, and robots.

HOW GPUs ACCELERATE SOFTWARE APPLICATIONS

GPU-accelerated computing offloads compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run much faster.

How GPU Acceleration Works

GPU versus CPU Performance

A simple way to understand the difference between a GPU and a CPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.
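The contrast can be shown in miniature: the same computation written as a CPU-style element-by-element loop and as a single data-parallel operation over the whole array. NumPy here stands in for the GPU's "apply one operation across thousands of elements at once" model; this is an analogy, not actual GPU code:

```python
import numpy as np

# The same computation two ways: an element-by-element loop
# (CPU-style serial processing) and one vectorized operation
# over the whole array (GPU-style data parallelism).
data = np.arange(100_000, dtype=np.float64)

# Serial: one element at a time.
serial = np.empty_like(data)
for i in range(len(data)):
    serial[i] = data[i] * 2.0 + 1.0

# Data-parallel: one operation applied to every element at once.
parallel = data * 2.0 + 1.0

print(bool((serial == parallel).all()))  # True
```

On a real GPU, each of its thousands of cores would handle a slice of the array simultaneously, which is why the vectorized formulation maps so well to GPU hardware.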

GPUs have thousands of cores to process parallel workloads efficiently

GPU versus CPU: Which is better?

Check out the video clip below for an entertaining GPU versus CPU comparison


Video: Mythbusters Demo: GPU vs CPU (01:34)

Intel® Purley Platform

The new Intel® Purley Platform features the new 14nm microarchitecture, codenamed Skylake.

Advanced Features Are Designed into the Silicon

Synergy among compute, network, and storage is built in. Intel® Xeon® Scalable processors optimize interconnectivity with a focus on speed without compromising data security. Here are just a few of the value-added features:

Pervasive, Breakthrough Performance

The new microarchitecture provides increased per-core and socket-level performance.

Accelerate Compute-Intensive Workloads

Intel® AVX-512 can accelerate performance for workloads such as video and image analytics, cryptography, and signal processing.

Integrated Intel® Ethernet with iWARP RDMA

Speeds of up to 4x10GbE with iWARP RDMA provide low-latency, high-throughput data communication, integrated in the chipset.

Integrated Intel® QuickAssist Technology (Intel® QAT)

Data compression and cryptography acceleration frees the host processor and enhances data transport and protection across server, storage, network, and VM migration. Integrated in the chipset.

Integrated Intel® Omni-Path Architecture (Intel® OPA)

End-to-end high-bandwidth, low-latency fabric optimizes performance and HPC cluster deployment by eliminating the need for a discrete host fabric interface card.

Protecting the Future of Data

15-year product availability and 10-year use-case reliability help protect your investment.

Other major enhancements:

  • Memory technology: memory bandwidth increased by a factor of 1.5; 6x DDR4 channels at 2133, 2400, or 2666 MT/s; RDIMM, LRDIMM, Apache Pass
  • New CPU socket: Socket P
  • Scalable from 2 sockets to 4 sockets to 8 sockets
  • Thermal Design Power (TDP) range from 70W up to 205W
  • New UltraPath Interconnect (UPI): 2-3 channels per processor at up to 10.4 gigatransfers per second
  • 48 PCIe lanes per processor with bifurcation support: x16, x8, x4
    PCI Express uses serial communication, a drastically different scheme from parallel buses: each device in the system gets its own dedicated channels, and the total number of PCI Express lanes can be split into subgroups for specific PCI Express devices. This splitting into subgroups is PCI Express bifurcation.
  • PCH (Platform Controller Hub), codename Lewisburg: DMI3 x4 chipset bus
  • Power management: Per-Core P-State (PCPS), Uncore Frequency Scaling (UFS), Energy Efficient Turbo (EET), on-die PMAX detection (new), HW-Controlled P-State (HWP) (new)
  • Rebalanced cache hierarchy: increased MLC, 1.375MB last-level cache per core
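The bifurcation idea described above can be illustrated by enumerating how a 16-lane slot may be split into the supported x16/x8/x4 link widths. This is a toy sketch of the subgroup concept, not any firmware interface:

```python
# Enumerate the ways to split a x16 PCIe slot into x16/x8/x4
# links, widest first -- the "subgroup" splits that bifurcation
# settings such as x8x8 or x8x4x4 expose in firmware.
def bifurcations(lanes, widths=(16, 8, 4), largest=16):
    if lanes == 0:
        return [[]]
    combos = []
    for w in widths:
        if w <= min(lanes, largest):
            for rest in bifurcations(lanes - w, widths, w):
                combos.append([w] + rest)
    return combos

for combo in bifurcations(16):
    print("x" + "x".join(str(w) for w in combo))
# x16, x8x8, x8x4x4, x4x4x4x4
```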

 

Intel® Cache Acceleration Software

Improve Application Performance with Intel® Cache Acceleration Software (Intel® CAS)

Today’s data centers are held back by storage I/O that cannot keep up with ever-increasing demand, preventing systems from reaching their full performance potential. Traditional solutions, such as increasing storage, servers, or memory, add huge expense and complexity.

Intel® Cache Acceleration Software (Intel® CAS), combined with high-performance Solid State Drives (SSDs), increases data center performance via intelligent caching rather than extreme spending. Intel® CAS interoperates with server memory to create a multilevel cache that optimizes the use of system memory and automatically determines the best cache level for active data, allowing applications to perform even faster than running fully on flash/SSDs.1