What is better? Unsafe, ByteBuffer or Direct ByteBuffer?





Summary

Unsafe memory allocation is about 10-12% faster than ByteBuffer and Direct ByteBuffer. Since Unsafe memory lives outside the JVM heap, the application needs to manage its allocation and deallocation itself.

There are also a couple of Java runtime options worth considering which might help boost the performance of ByteBuffer.

The conclusion is: consider all three options listed below and see which one is better for your specific use case. We are working on open sourcing a wrapper library which will wrap an Unsafe buffer in the java ByteBuffer API so that the two can be used interchangeably in code.
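As a rough illustration, here is a minimal sketch of one way such wrapping can work on JDK 8: allocate memory with sun.misc.Unsafe and hand the raw address to the package-private java.nio.DirectByteBuffer constructor via reflection. The class and method names below are ours, not the wrapper library's, and the technique relies on JDK internals, so treat it as an assumption rather than our actual implementation.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import sun.misc.Unsafe;

public class UnsafeBufferSketch {
    public static void main(String[] args) throws Exception {
        Unsafe unsafe = getUnsafe();

        int size = 4096;
        long address = unsafe.allocateMemory(size);   // raw off-heap allocation

        // Wrap the raw address in a ByteBuffer view using the package-private
        // java.nio.DirectByteBuffer(long, int) constructor (JDK 8 internals).
        Constructor<?> ctor = Class.forName("java.nio.DirectByteBuffer")
                .getDeclaredConstructor(long.class, int.class);
        ctor.setAccessible(true);
        ByteBuffer view = (ByteBuffer) ctor.newInstance(address, size);

        view.putLong(0, 42L);                         // normal ByteBuffer API on Unsafe memory
        System.out.println(view.getLong(0));

        unsafe.freeMemory(address);                   // the application owns deallocation
    }

    // sun.misc.Unsafe is not exposed to application code, so it is fetched via reflection.
    private static Unsafe getUnsafe() throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }
}
```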

In WonderDB, we are going to support all three options. Since most of our cache memory is allocated at startup and released at shutdown, we really have no issue with the application having to manage memory itself in the case of Unsafe allocation.

Our Use Case

WonderDB is a NoSql transactional database built on top of relational database architecture and design concepts. Our caches are fully disk backed, and we manage the life cycle of data buffers by paging them in and out of disk files. Our caching engine is currently built on top of direct ByteBuffer, and we are now in the process of evaluating other approaches for performance improvements.

Different ways to store bytes in to memory

We are considering the following options:

  • ByteBuffer
  • Direct ByteBuffer
  • Unsafe

Most cache access in WonderDB consists of reading and writing serialized byte arrays.

I found a detailed performance analysis of all three methods in Alexey’s blog. Unsafe read/write performance for byte arrays is about 10-12% better than ByteBuffer or Direct ByteBuffer.
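To make the comparison concrete, below is a minimal sketch of the access pattern being measured: writing and reading a serialized byte array through each of the three mechanisms. It assumes JDK 8, where sun.misc.Unsafe can be obtained via reflection; the class and variable names are illustrative only.

```java
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import sun.misc.Unsafe;

public class BufferAccessSketch {
    public static void main(String[] args) throws Exception {
        byte[] record = "serialized record".getBytes();
        byte[] copy = new byte[record.length];

        // 1. Heap ByteBuffer: backed by a byte[] inside the JVM heap.
        ByteBuffer heap = ByteBuffer.allocate(record.length);
        heap.put(record);
        heap.flip();
        heap.get(copy);

        // 2. Direct ByteBuffer: off-heap memory, but still managed by the JVM.
        ByteBuffer direct = ByteBuffer.allocateDirect(record.length);
        direct.put(record);
        direct.flip();
        direct.get(copy);

        // 3. Unsafe: raw off-heap memory; allocation and deallocation are on us.
        Unsafe unsafe = getUnsafe();
        long address = unsafe.allocateMemory(record.length);
        try {
            // copy byte[] -> off-heap, then off-heap -> byte[]
            unsafe.copyMemory(record, Unsafe.ARRAY_BYTE_BASE_OFFSET, null, address, record.length);
            unsafe.copyMemory(null, address, copy, Unsafe.ARRAY_BYTE_BASE_OFFSET, copy.length);
        } finally {
            unsafe.freeMemory(address);
        }
    }

    // sun.misc.Unsafe is fetched via reflection, as in the previous sketch.
    private static Unsafe getUnsafe() throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }
}
```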

Even though Unsafe memory allocation is faster, we believe there are a couple of JVM runtime options that should help ByteBuffer close the gap.

JVM runtime options to consider

-XX:+UseLargePages

Most major operating systems support large memory pages. This setting improves performance for the following reasons:

  1. increased performance through more Translation Lookaside Buffer (TLB) hits
  2. pages are locked in memory and never swapped out, which guarantees the whole JVM heap stays in RAM. The same guarantee cannot be given for Direct ByteBuffer memory.
  3. contiguous pages are pre-allocated and cannot be used for anything other than System V shared memory, for example the JVM heap
  4. less bookkeeping work for the kernel in that part of virtual memory, due to the larger page size

Only the heap ByteBuffer can take advantage of this JVM setting. It does not help Direct ByteBuffer or Unsafe buffers, since those are allocated off heap and outside the JVM's control.

More information about large memory pages can be found here on the Red Hat site.

Please note that large memory pages need to be enabled on the machine before using this option. See the Oracle site here to understand how to enable large pages on different operating systems, including Linux, Windows, and Solaris.

To use this option, set the large page pool slightly larger than the Java heap size you plan to use. That way the whole Java heap will be pinned in memory, and the OS will not page its contents in and out.
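As a hypothetical example on Linux with a 2 MB huge page size: an 8 GB heap needs a little more than 4096 huge pages reserved before the JVM starts. The page count, heap size, and jar name below are illustrative only.

```
# reserve ~8.5 GB of 2 MB huge pages (requires root), then start the JVM with large pages enabled
sysctl -w vm.nr_hugepages=4352
java -XX:+UseLargePages -Xms8g -Xmx8g -jar wonderdb.jar
```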

-XX:+UseCompressedOops java option

This option enables compressed ordinary object pointers on a 64-bit JVM, which shrinks object references to 32 bits and reduces the heap footprint.
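On recent 64-bit JVMs this option is typically enabled by default for heaps smaller than roughly 32 GB; one illustrative way to check the effective value is to print the final flag settings:

```
java -XX:+UseCompressedOops -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```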

 
