The advent of many-core processors, with far less memory available per core, has shifted the bottleneck in computing from flops to memory. A new, complex memory/storage hierarchy is emerging: persistent memories offer greatly expanded capacity, augmented by DRAM/SRAM caches and scratchpads that mitigate their higher latency.

Overview of Memory-centric Architectures

As shown above, non-volatile random access memory (NVRAM) technologies such as resistive RAM (RRAM) and phase-change memory (PCM) may be attached to the memory bus or the I/O bus, and may use DRAM buffers to improve latency and reduce wear.
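To make the role of a DRAM buffer concrete, here is a small illustrative model (not the project's design, and the class and block names are invented for the sketch): a write-back DRAM buffer in front of a wear-limited persistent medium absorbs repeated writes to the same block and commits each dirty block to the medium only once, reducing both write latency and wear.

```python
# Illustrative sketch only: a DRAM write-back buffer in front of a
# wear-limited persistent medium (e.g., PCM). Names are hypothetical.
class BufferedNVM:
    def __init__(self):
        self.medium = {}        # block id -> data actually on the medium
        self.buffer = {}        # dirty blocks held in the DRAM buffer
        self.medium_writes = 0  # wear proxy: physical writes to the medium

    def write(self, block, data):
        self.buffer[block] = data      # absorb the write in DRAM

    def read(self, block):
        # Serve from the DRAM buffer first (lower latency), else the medium.
        return self.buffer.get(block, self.medium.get(block))

    def flush(self):
        # One physical write per dirty block, no matter how often it changed.
        for block, data in self.buffer.items():
            self.medium[block] = data
            self.medium_writes += 1
        self.buffer.clear()

nvm = BufferedNVM()
for i in range(100):
    nvm.write(7, bytes([i]))   # 100 logical writes to the same block
nvm.flush()
print(nvm.medium_writes)       # 1 physical write: wear reduced 100x
```

The same coalescing idea applies whether the buffer sits on the memory bus or inside an I/O-attached device controller.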

Our research program focuses on transforming the memory-storage interface with three complementary approaches:

  • Active memory and storage, in which processing is shared between the CPU and in-memory/in-storage controllers,
  • Efficient software cache and scratchpad management, enabling memory-mapped access to large, local persistent stores,
  • Algorithms and applications built on a latency-tolerant, throughput-driven, massively concurrent computation model.
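The second bullet, memory-mapped access to a local persistent store, can be sketched as follows. This is an illustrative example, not project code; the file name and size are assumptions. Python's `mmap` module maps a backing file into the address space, so the program accesses persistent data with ordinary loads and stores rather than explicit read()/write() calls, with an explicit flush forcing dirty pages back to the store.

```python
# Illustrative sketch: memory-mapped access to a local persistent store.
# The backing file name and size are hypothetical.
import mmap
import os

PATH = "store.bin"   # hypothetical backing file standing in for the store
SIZE = 4096          # one page

# Create and size the backing file.
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

# Map the file and update it with ordinary memory stores.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as m:
        m[0:5] = b"hello"   # a store into the mapping, no write() syscall
        m.flush()           # push dirty pages back to the persistent file

# The data persists after the mapping is gone.
with open(PATH, "rb") as f:
    recovered = f.read(5)
os.remove(PATH)
print(recovered)            # b'hello'
```

In a real system the same pattern extends to much larger stores, with the software cache deciding which pages of the mapping stay resident in DRAM.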

Our recent projects are described on our Activities page.