Architecture

At the core, it provides a persistent BTree+ and linked lists of variable-size blocks, plus memory/buffer management. With this basic infrastructure, it can be customized to store any type in the database, with indexing support for fast data retrieval. The first use case we are implementing is a cache.

Record-level locking and variable buffer management are very useful for performance and throughput. With variable-size blocks, tables with bigger record sizes can be stored in data files with bigger block sizes, whereas indexes or small tables can be stored in data files with smaller block sizes. This way a record/index can be fetched into the memory cache with one disk I/O.

We have seen it scale linearly up to 33,000 queries per second per node on an Amazon EC2 xlarge node (4 CPUs, 13 GB RAM, and two 40 GB SSDs with 3,000 IOPS).

Buffer Cache Manager

There are two buffer caches in the system: one sits in the JVM heap (by default it is allocated about 30% of the heap size) and the other uses Java direct buffers outside of the JVM heap.

A buffer cache is a simple fixed-bucket hash map of objects or blocks. The buffer cache in the JVM heap stores record or index objects, whereas the buffer cache in direct memory stores blocks of bytes.
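
For illustration, a fixed-bucket hash map of this kind could look roughly like the sketch below; the class, its bucket count and its locking are assumptions for illustration, not the actual implementation:

    // Minimal sketch of a fixed-bucket hash map keyed by disk location.
    // All names here are hypothetical; the real implementation may differ.
    class FixedBucketCache<V> {
        private static final int BUCKETS = 1024;          // fixed bucket count (power of two)

        private static final class Entry<V> {
            final long key;        // block's location on disk
            V value;               // record/index object, or block of bytes
            Entry<V> next;
            Entry(long key, V value, Entry<V> next) { this.key = key; this.value = value; this.next = next; }
        }

        @SuppressWarnings("unchecked")
        private final Entry<V>[] table = (Entry<V>[]) new Entry[BUCKETS];

        private int bucket(long key) { return Long.hashCode(key) & (BUCKETS - 1); }

        synchronized V get(long key) {
            for (Entry<V> e = table[bucket(key)]; e != null; e = e.next) {
                if (e.key == key) return e.value;
            }
            return null;
        }

        synchronized void put(long key, V value) {
            int b = bucket(key);
            for (Entry<V> e = table[b]; e != null; e = e.next) {
                if (e.key == key) { e.value = value; return; }
            }
            table[b] = new Entry<>(key, value, table[b]);   // prepend to the bucket chain
        }
    }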

On a cache miss, the JVM buffer cache gets the data from the direct memory buffer cache. The direct memory buffer cache, on its own cache miss, brings the buffer in from disk. The requesting thread, which is most probably the thread executing the query, blocks until the buffer is brought into the JVM buffer cache.
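
A rough sketch of that read path (the map types, the deserialize step and readFromDisk here are stand-ins, not the real classes):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative two-level read-through; the real classes and methods differ.
    class TwoLevelCache {
        private final ConcurrentHashMap<Long, Object> heapCache = new ConcurrentHashMap<>();       // JVM-heap objects
        private final ConcurrentHashMap<Long, ByteBuffer> directCache = new ConcurrentHashMap<>();  // off-heap byte blocks

        Object fetch(long diskLocation) throws IOException {
            Object obj = heapCache.get(diskLocation);          // 1. look in the JVM-heap cache
            if (obj != null) return obj;

            ByteBuffer raw = directCache.get(diskLocation);    // 2. miss: look in the direct-memory cache
            if (raw == null) {
                raw = readFromDisk(diskLocation);              // 3. double miss: blocking disk read
                directCache.put(diskLocation, raw);
            }
            obj = deserialize(raw);                            // materialize the record/index object
            heapCache.put(diskLocation, obj);                  // the query thread unblocks here
            return obj;
        }

        private ByteBuffer readFromDisk(long offset) throws IOException {
            return ByteBuffer.allocateDirect(0);               // placeholder for the real positional file read
        }

        private Object deserialize(ByteBuffer raw) {
            return raw.duplicate();                            // placeholder for the real decoding
        }
    }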

For queries which modify buffer contents (insert, update and delete queries), the data is modified in both buffer caches.

Buffers which are accessed during query processing (any CRUD query) are pinned in memory by inserting their pointers (their locations on disk) into a ConcurrentLinkedQueue, so that the eviction and writer threads don't evict these buffers (or objects) or sync them back to disk, as they are most probably locked during query processing.
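
A minimal sketch of that pinning bookkeeping, with illustrative method names:

    import java.util.concurrent.ConcurrentLinkedQueue;

    // Sketch of pinning buffers by disk pointer so evictor/writer threads skip them.
    class PinRegistry {
        private final ConcurrentLinkedQueue<Long> pinned = new ConcurrentLinkedQueue<>();

        void pin(long diskLocation) {
            pinned.add(diskLocation);                 // called when query processing touches the buffer
        }

        void unpin(long diskLocation) {
            pinned.remove(diskLocation);              // called when the query is done with the buffer
        }

        boolean isPinned(long diskLocation) {
            return pinned.contains(diskLocation);     // evictor/writer consults this before evicting/syncing
        }
    }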

Cache Evictor

There are two evictor threads in the system.

Normal JVM heap evictor thread

This thread starts evicting from the JVM heap buffer cache when it reaches the high watermark, removing the least used buffers which are not pinned (being accessed at the time). The low watermark is set to 90% of the cache size by default, and the high watermark to 95%. It evicts objects until the cache reaches the low watermark.

Normal Direct memory evictor thread

When this cache reaches the high watermark, the eviction thread starts evicting the least used buffers which are neither pinned nor dirty (dirty buffers are buffers which have changed recently and have not yet been synced to disk). The low watermark is set to 90% of the cache size by default, and the high watermark to 95%. It evicts objects until the cache reaches the low watermark.
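
A rough sketch of the watermark loop used by both evictor threads; the access-ordered LinkedHashMap, the entry flags and all names are assumptions for illustration:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Watermark-driven eviction sketch: once usage crosses the high watermark,
    // evict least-recently-used entries that are neither pinned nor dirty until
    // usage drops below the low watermark.
    class Evictor {
        static final double LOW_WATERMARK = 0.90;     // default: 90% of cache size
        static final double HIGH_WATERMARK = 0.95;    // default: 95% of cache size

        static class CacheEntry {
            volatile boolean pinned;   // currently being accessed by a query
            volatile boolean dirty;    // changed but not yet synced to disk
        }

        private final LinkedHashMap<Long, CacheEntry> lru;   // construct with accessOrder = true
        private final int capacity;                          // cache size in entries

        Evictor(LinkedHashMap<Long, CacheEntry> lru, int capacity) {
            this.lru = lru;
            this.capacity = capacity;
        }

        void maybeEvict() {
            synchronized (lru) {
                if (lru.size() < capacity * HIGH_WATERMARK) {
                    return;                                   // below the high watermark: nothing to do
                }
                Iterator<Map.Entry<Long, CacheEntry>> it = lru.entrySet().iterator();
                while (it.hasNext() && lru.size() > capacity * LOW_WATERMARK) {
                    CacheEntry e = it.next().getValue();      // eldest = least recently used
                    if (e.pinned || e.dirty) {
                        continue;                             // skip pinned and dirty buffers
                    }
                    it.remove();                              // evict this entry
                }
            }
        }
    }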

Cache Writer

This can be set to two modes:

Aggressive write mode

As soon as a buffer gets dirty (for example due to an insert, update or delete query), it is written back to disk. This setting is useful when the log writer is not enabled.

Timed write mode

In this mode, the main writer thread wakes up periodically (every 3 seconds by default) and writes all dirty buffers which changed before this pass of the writer thread started.

During the scan, for every dirty buffer it picks a thread from the pool to write it to disk. If no thread is available in the thread pool, it waits until one becomes available. The writer is thus multi-threaded, which is very useful when a machine has a lot of spindles.
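
A sketch of this timed, pooled writer; the 3 second interval, the pool size of 4 and all names are illustrative defaults, not the actual configuration:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;

    // Timed write mode sketch. Every few seconds the scheduler scans dirty buffers
    // and hands each one to a writer pool; a semaphore makes the scan wait when no
    // pool thread is free.
    class TimedCacheWriter {
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        private final ExecutorService writerPool = Executors.newFixedThreadPool(4);
        private final Semaphore freeWriters = new Semaphore(4);    // one permit per pool thread

        void start() {
            scheduler.scheduleWithFixedDelay(this::writeDirtyBuffers, 3, 3, TimeUnit.SECONDS);
        }

        private void writeDirtyBuffers() {
            for (long diskLocation : snapshotDirtyBuffers()) {      // buffers dirtied before this pass
                try {
                    freeWriters.acquire();                           // wait until a writer thread is available
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                writerPool.submit(() -> {
                    try {
                        writeToDisk(diskLocation);                   // sync this buffer back to its file
                    } finally {
                        freeWriters.release();
                    }
                });
            }
        }

        private List<Long> snapshotDirtyBuffers() { return List.of(); }   // placeholder
        private void writeToDisk(long diskLocation) { /* placeholder for the real write */ }
    }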

Data structures

Both the BTree+ and the linked list are backed by the persistent store, so that if any element/block is not available in memory, it can be fetched from disk.
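
One way to picture this, as a rough sketch with hypothetical names: each in-memory reference keeps the disk pointer and resolves lazily:

    // Sketch of a persistent block reference: the in-memory structure holds the
    // disk pointer and loads the block on demand if it is not cached.
    class BlockRef<T> {
        private final long diskLocation;        // where this element/block lives on disk
        private volatile T cached;              // null when not in memory
        private final BlockLoader<T> loader;    // knows how to read and decode the block

        BlockRef(long diskLocation, BlockLoader<T> loader) {
            this.diskLocation = diskLocation;
            this.loader = loader;
        }

        T resolve() {
            T value = cached;
            if (value == null) {                // not in memory: go get it from disk
                value = loader.load(diskLocation);
                cached = value;
            }
            return value;
        }

        interface BlockLoader<T> {
            T load(long diskLocation);
        }
    }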

How Queries are executed

set

  1. First it creates unique index objects from the insert parameters and puts them, if absent, into a global ConcurrentHashMap. This step is required to make sure no other query is inserting or updating the same unique indexes; if one is, a UKViolationException is thrown (see the sketch after this list).
  2. Then it checks every unique BTree+ index tree to make sure these indexes are not already present in the BTree. If they are, it throws a UKViolationException.
  3. It inserts the record into the collection/table linked list.
  4. It inserts all indexes into the various BTree+ trees.
  5. It removes the unique index objects added to the ConcurrentHashMap in step 1 and returns.
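
A minimal sketch of the in-flight uniqueness guard from steps 1 and 5; UniqueKey, the method names and the exception wiring are illustrative:

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;

    // putIfAbsent reserves each unique key for this insert; any concurrent
    // insert/update of the same key sees the reservation and fails.
    class UniqueKeyGuard {
        private final ConcurrentHashMap<UniqueKey, Thread> inFlight = new ConcurrentHashMap<>();

        void reserve(List<UniqueKey> keys) {
            for (UniqueKey key : keys) {
                if (inFlight.putIfAbsent(key, Thread.currentThread()) != null) {
                    release(keys);                                   // another query holds this key
                    throw new UKViolationException("duplicate unique key: " + key);
                }
            }
        }

        void release(List<UniqueKey> keys) {
            for (UniqueKey key : keys) {
                inFlight.remove(key, Thread.currentThread());        // only remove our own reservation
            }
        }

        static final class UniqueKey { /* index id + column values; equals/hashCode elided */ }

        static final class UKViolationException extends RuntimeException {
            UKViolationException(String msg) { super(msg); }
        }
    }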

get

  1. A select query can select more than one record, based on the where filter.
  2. Based on the filter columns, it checks whether it needs to do an index scan or a full table scan.
  3. If it is a full table scan, it positions the full-table-scan iterator at the head of the collection/table linked list.
  4. If it is an index scan, it first positions the index-scan iterator on the proper LeafIndexBlock; this is where it will start scanning the index list.
  5. For each record in the iterator, it applies the filter and generates the list of record pointers in the result set. For an index scan, if the filter returns true it selects that record, otherwise it stops scanning. For a full table scan it scans the whole linked list.
  6. Once it has the list of record pointers, it selects every record and, if that record still passes the filter, returns it based on the select column list (a scan sketch follows this list).
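
A rough sketch of the scan in steps 3-6; Record, the iterator handling and the pointer type are illustrative, and projection to the select column list is omitted:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.function.Predicate;

    // Iterate records from a full-table or index-positioned iterator, collect
    // pointers of matching records, then re-check the filter while materializing.
    class SelectScan {
        List<Record> select(Iterator<Record> scan, boolean indexScan, Predicate<Record> filter) {
            List<Long> pointers = new ArrayList<>();
            while (scan.hasNext()) {
                Record r = scan.next();
                if (filter.test(r)) {
                    pointers.add(r.diskLocation());           // remember the matching record's pointer
                } else if (indexScan) {
                    break;                                    // index scan stops at the first non-match
                }                                             // full scan keeps going through the list
            }
            List<Record> result = new ArrayList<>();
            for (long ptr : pointers) {
                Record r = fetchRecord(ptr);                  // re-read the record by pointer
                if (filter.test(r)) {                         // it must still pass the filter
                    result.add(r);
                }
            }
            return result;
        }

        private Record fetchRecord(long ptr) {
            throw new UnsupportedOperationException("buffer-cache read not shown");
        }

        interface Record { long diskLocation(); }
    }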

set (Update)

  1. An update query can update more than one record.
  2. So it first selects the records which need to be updated, executing steps 1-5 of the select query.
  3. For each record in the select list, it updates the record if that record still passes the filter.

remove

  1. Similar to the update query, a delete query first selects the records which need to be deleted.
  2. Then it deletes every record which still passes the filter.

Locking

Locking Tree

  1. The whole tree is read-locked while the tree is being searched. Once the record/block is located, this lock is released.
  2. Once the leaf node/index entry is located, it puts a read or write lock on the index leaf block based on the type of query: for reads it takes a read lock, and for writes it takes an exclusive write lock on the index leaf block.
  3. If the block needs to split, it sets splitRequired = true for that block, releases the write lock, then tries to acquire an exclusive write lock on the tree and goes through the BTree split motions. During this, any number of blocks might need to split, but since the tree is locked exclusively, other threads won't be affected. Also, during the time between the lock on the leaf block being released and the exclusive lock on the tree being acquired, if any other thread tries to update that leaf block, it will get a SplitRequiredException (a locking sketch follows this list).
  4. The current implementation also sets the splitRequired = true flag for an index leaf block when all index entries in that block have been removed, then gets an exclusive lock on the tree and removes that block, and possibly many more blocks, from the tree. Technically this could be done separately without blocking the query thread. Something I need to change in the future.
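
A rough sketch of this locking order using ReentrantReadWriteLock; LeafBlock, the method names and the split placeholders are illustrative, not the actual classes:

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Take the tree read lock to descend, lock the leaf for the query, release
    // the tree lock; a split escalates to the exclusive tree lock.
    class TreeLocking {
        private final ReentrantReadWriteLock treeLock = new ReentrantReadWriteLock();

        static class LeafBlock {
            final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
            volatile boolean splitRequired;
        }

        void writeToLeaf(LeafBlock leaf, Runnable mutation) {
            treeLock.readLock().lock();                 // 1. whole tree read-locked while searching
            try {
                leaf.lock.writeLock().lock();           // 2. exclusive write lock on the leaf block
            } finally {
                treeLock.readLock().unlock();           //    search lock released once the leaf is located
            }
            try {
                if (leaf.splitRequired) {
                    throw new SplitRequiredException(); // another thread is about to split this leaf
                }
                mutation.run();
                if (leafIsFull(leaf)) {
                    leaf.splitRequired = true;          // 3. mark, drop the leaf lock, escalate to the tree lock
                }
            } finally {
                leaf.lock.writeLock().unlock();
            }
            if (leaf.splitRequired) {
                treeLock.writeLock().lock();            // exclusive tree lock for the split itself
                try {
                    splitLeaf(leaf);                    // may split/remove several blocks safely
                } finally {
                    treeLock.writeLock().unlock();
                }
            }
        }

        private boolean leafIsFull(LeafBlock leaf) { return false; }            // placeholder
        private void splitLeaf(LeafBlock leaf) { leaf.splitRequired = false; }   // placeholder

        static class SplitRequiredException extends RuntimeException {}
    }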

Locking Link List

  1. The specific block in the linked list is read- or write-locked for every CRUD operation on a record in that block.
  2. During table scans, when the scan needs to move to the next block, it first locks the next block and then releases the lock on the current block. So, for some time, two consecutive blocks in the list are locked. Because of this, the list can only be scanned in the forward direction (see the sketch below).
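
A minimal sketch of that hand-over-hand (lock-coupling) scan, with illustrative names:

    import java.util.concurrent.locks.ReentrantReadWriteLock;
    import java.util.function.Consumer;

    // Lock the next block before releasing the current one, so at most two
    // consecutive blocks are locked and the scan can only move forward.
    class ListScan {
        static class ListBlock {
            final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
            volatile ListBlock next;
        }

        void scan(ListBlock head, Consumer<ListBlock> visit) {
            if (head == null) return;
            ListBlock current = head;
            current.lock.readLock().lock();
            while (current != null) {
                visit.accept(current);                       // read records in this block
                ListBlock next = current.next;
                if (next != null) {
                    next.lock.readLock().lock();             // lock the next block first...
                }
                current.lock.readLock().unlock();            // ...then release the current one
                current = next;
            }
        }
    }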

Extending the Link List

  1. Extending the tail of the linked list is carried out by a separate asynchronous thread. Each time, it extends the list by a configurable number of blocks (5 blocks by default). This way, the whole list doesn't get locked until the schema metadata for that collection/table is updated.

This scheme of extending the linked list is required so that inserts don't become serialized to a single thread when the tail needs to be extended during inserts.
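
A rough sketch of such an asynchronous tail extender; the interface, names and the trigger condition are illustrative:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicBoolean;

    // When the tail runs out of room, a background task appends a batch of new
    // blocks (5 by default) and then updates the collection's schema metadata once.
    class TailExtender {
        private static final int BLOCKS_PER_EXTENSION = 5;

        private final ExecutorService extender = Executors.newSingleThreadExecutor();
        private final AtomicBoolean extending = new AtomicBoolean(false);

        void maybeExtend(CollectionList list) {
            if (list.freeTailBlocks() > 0) return;                 // still room at the tail
            if (!extending.compareAndSet(false, true)) return;     // an extension is already running
            extender.submit(() -> {
                try {
                    list.appendBlocks(BLOCKS_PER_EXTENSION);       // allocate and link new blocks
                    list.updateSchemaMetadata();                   // single metadata update per batch
                } finally {
                    extending.set(false);
                }
            });
        }

        interface CollectionList {
            int freeTailBlocks();
            void appendBlocks(int count);
            void updateSchemaMetadata();
        }
    }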
