A service developed by University of Michigan researchers called Infiniswap made this technology—called "memory disaggregation"—feasible in 2017, but it still suffered from several latency overheads that made real-world adoption unlikely. Now, a new system from the same lab called Leap improves upon this and other disaggregation solutions by applying a technique called prefetching to remote memory environments. Leap earned a Best Paper Award at the 2020 USENIX Annual Technical Conference. The main problem facing practical memory disaggregation, according to Leap co-author Hasan Al Maruf, was a speed difference between local and remote memory access.
"When you access memory locally," says Maruf, a Ph.D. student in U-M's Computer Science and Engineering division, "memory runs at the nanosecond level. But when you go to the network, things run at the microsecond level." It takes an average of four or five microseconds to load a page of data from memory remotely (a page is the smallest unit of data the processor typically fetches from memory at a time). In the worst case it can take up to 40-50 microseconds. This might not sound like much, but the order of magnitude delay in moving from nanoseconds to microseconds represents a major performance drop—about the same as comparing RAM with a solid state drive, or solid state with a hard disk drive. The tradeoff was too great, and existing methods proved favorable to remote memory access.
"Our measurements show that an average 4KB remote page access takes close to 40 microseconds in state-of-the-art memory disaggregation systems like Infiniswap," write Maruf and co-author Prof. Mosharaf Chowdhury. "Such high access latency significantly affects performance because memory-intensive applications can tolerate at most single microsecond latency." This issue was compounded further by overheads introduced by the operating system. The data path used by operating systems in these remote settings to access an address in memory was originally designed for interactions with local disks, which operate at the even slower millisecond level. These data paths come bundled with features meant to hide this disk access time that aren't practical for data center networks and end up bogging everything down.
It was Maruf and Chowdhury's goal with Leap to "hide" these sources of latency with two tricks: prefetching pages wherever possible, and using more efficient data paths that discard the operating system's irrelevant disk-access features. Their goal with the prefetcher is to reduce the number of calls to remote memory in the program's critical path, keeping useful data close at hand on the local machine running the program. Prefetching is commonplace in single-machine settings, where the most common performance bottleneck is making calls to the hard disk; there, systems try to smartly identify additional data in a sequence that might be useful soon and pull it into a much faster cache. Emulating this technique becomes difficult in remote memory settings because related data might not be stored sequentially; in fact, it might be scattered across the data center's available memory. Existing attempts to solve this problem identify patterns in how memory is accessed in real time, or track the program's memory access footprint across the entire virtual memory address space.
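To make the single-machine baseline concrete, here is a minimal sketch of sequential readahead, the classic prefetching idea described above: when a program demands one page, the system speculatively fetches the next few as well. The window size, page numbering, and `fetch_page` stand-in are illustrative assumptions, not details from the paper.

```python
READAHEAD_WINDOW = 8  # pages to prefetch beyond the demanded page (assumed)

def fetch_page(page_no):
    """Stand-in for an actual slow fetch from disk or remote memory."""
    return f"contents-of-page-{page_no}"

cache = {}  # stand-in for the local page cache

def access(page_no):
    """Return the demanded page; on a miss, also prefetch its sequential neighbors."""
    if page_no not in cache:
        # Demand fetch plus speculative readahead of the next pages in sequence.
        for p in range(page_no, page_no + READAHEAD_WINDOW + 1):
            cache.setdefault(p, fetch_page(p))
    return cache[page_no]

access(100)         # miss: fetches pages 100 through 108
hit = 105 in cache  # True: page 105 arrived via readahead, not a demand fetch
```

This works well when accesses really are sequential; the difficulty Leap addresses is that remote-memory accesses often are not.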
Both of these techniques demand significant CPU usage and additional memory overhead. Instead, the authors only approximate patterns in a program's memory access. This lets Leap identify the clearest opportunities to prefetch additional data without spending resources on greater precision: only those cases where the program accesses the same addresses in memory repeatedly, the so-called majority access pattern, are identified. Combined with the simplified data path used for each address the program accesses in remote memory, the prefetcher allows nearly all applications to run as if they were working with local memory.
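The majority-based idea can be sketched as follows: keep only a short window of recent page accesses and prefetch only when a clear majority trend emerges among the strides between them. This is a simplified illustration in the spirit of that approximation, using a Boyer-Moore-style majority vote over access deltas; the window size and class structure are assumptions of this sketch, not a faithful reproduction of Leap's internals.

```python
from collections import deque

WINDOW = 8  # how many recent page accesses to remember (assumed)

class TrendDetector:
    """Approximate trend detection: cheap majority vote, no full access history."""

    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def record(self, page_no):
        self.history.append(page_no)

    def majority_stride(self):
        """Return the stride if one delta dominates the recent window, else None.

        Uses the Boyer-Moore majority vote: O(n) time, O(1) extra space,
        so detection stays cheap even on the critical path.
        """
        h = list(self.history)
        deltas = [b - a for a, b in zip(h, h[1:])]
        if not deltas:
            return None
        # Phase 1: find a candidate by pairing off disagreeing deltas.
        candidate, count = None, 0
        for d in deltas:
            if count == 0:
                candidate, count = d, 1
            elif d == candidate:
                count += 1
            else:
                count -= 1
        # Phase 2: verify the candidate truly occurs in a strict majority.
        if deltas.count(candidate) * 2 > len(deltas):
            return candidate
        return None  # no clear trend: safer not to prefetch at all

det = TrendDetector()
for page in [100, 101, 102, 250, 103, 104]:  # mostly sequential, one outlier
    det.record(page)
print(det.majority_stride())  # a +1 stride dominates despite the jump
```

When no majority stride exists, the detector abstains rather than guessing, which matches the article's point: Leap spends resources only on the clearest prefetch opportunities.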
"This prefetching solution helps to hide the network latency, and the data path makes sure the operating system has no overhead," Maruf says. Using Leap to fix the system's data paths provided single-microsecond latency at worst in 95% of tasks, with an average of five or six microseconds. Including the prefetcher in their tests, Leap provided sub-microsecond latency, or latency in nanoseconds, on 85% of tasks. These speeds provide a big boost to the future of memory disaggregation in data centers in the long term. In the short term, Leap can provide latency benefits when reading pages from storage devices other than remote memory, like traditional disks and solid state drives.