WonderDB is a transactional, multi-model NewSQL database built on relational database architecture concepts such as record locking, blocks, caching, buffer management, and disk-backed hash and BTree+ indexes.

Every layer, from query processing and buffer management to the hash and BTree+ indexes and the cache writers, is multi-threaded and never acquires global database-level or table-level locks.

Due to its highly scalable architecture, it scales linearly with load. On an Amazon EC2 m3.xlarge VM (4 CPUs, 13 GB RAM, and two 40 GB SSDs) we have observed 60K queries per second. We also support variable-size blocks, so records of different sizes can be serialized to disk with minimal I/O, which further increases throughput.
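
To illustrate the variable-size block idea, here is a conceptual sketch in plain Java. The block sizes and length-prefixed layout are purely illustrative, not WonderDB's actual on-disk format; the point is that a record is placed in a block sized to fit it, so it can be written in a single I/O.

    import java.nio.ByteBuffer;

    // Conceptual sketch only: shows why variable-size blocks reduce I/O.
    // Block sizes and layout are illustrative, not WonderDB's real format.
    public class VariableBlockSketch {
        // Hypothetical sizes: small records go to 2 KB blocks,
        // large records to 32 KB blocks.
        static final int SMALL_BLOCK = 2 * 1024;
        static final int LARGE_BLOCK = 32 * 1024;

        static ByteBuffer serialize(byte[] record) {
            int needed = 4 + record.length;   // 4-byte length prefix + payload
            int blockSize = needed <= SMALL_BLOCK
                    ? SMALL_BLOCK
                    : Math.max(LARGE_BLOCK, needed);  // never overflow the block
            ByteBuffer block = ByteBuffer.allocate(blockSize);
            block.putInt(record.length);      // length prefix
            block.put(record);                // record payload
            block.flip();                     // ready to be written in one I/O
            return block;
        }

        public static void main(String[] args) {
            System.out.println(serialize(new byte[500]).capacity());    // 2048
            System.out.println(serialize(new byte[10_000]).capacity()); // 32768
        }
    }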

Please read our Blog for more about our architecture. We are working on an innovative way to combine document and name/value pair databases in one engine.

Our sharding and replication architecture is based on very different concepts, which will make it more scalable and configurable.

Stay tuned for more information as we start writing and open sourcing these features.

WonderDB is built on top of a collection of features that we are open sourcing in the very near future. Here is a roadmap of the features we will be open sourcing or have already open sourced.

Open source products

Single Node NoSql database

NoSql database with support for relational, document (XML, JSON), and name/value pair data models. It will also support custom data types and functions.

Clustering with sharding and replication is coming soon.

WonderDB Cache

It can be configured as a disk-backed Java application cache that uses a hash index or a BTree+ index, depending on query needs. This is useful when the cache footprint grows larger than physical memory. Note that it extends to memory outside the JVM heap through the use of direct byte buffers, so you don't have to increase the JVM heap size after integrating with this cache. For more information on its usage and configuration, please read Getting Started.
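
The off-heap behavior comes from direct byte buffers. The short sketch below is plain JDK code, not the WonderDB cache API; it only shows that memory allocated this way does not count against the JVM heap, which is why the heap size does not need to grow when the cache does.

    import java.nio.ByteBuffer;

    // Plain-JDK illustration of the off-heap idea behind the cache.
    // Not the WonderDB cache API; it only shows that direct buffers
    // live outside the JVM heap, so -Xmx does not need to grow.
    public class OffHeapSketch {
        public static void main(String[] args) {
            long heapBefore = Runtime.getRuntime().totalMemory()
                            - Runtime.getRuntime().freeMemory();

            // 256 MB allocated outside the heap via a direct byte buffer.
            ByteBuffer offHeap = ByteBuffer.allocateDirect(256 * 1024 * 1024);
            offHeap.putLong(0, 42L);   // cached data is written off-heap

            long heapAfter = Runtime.getRuntime().totalMemory()
                           - Runtime.getRuntime().freeMemory();

            // Heap usage barely changes even though 256 MB was allocated.
            System.out.println("heap delta (bytes): " + (heapAfter - heapBefore));
        }
    }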

Resource Journaling / Redo Log Manager

This product provides transactional commit and rollback semantics for any serializable resource. It writes the serialized resource to write-ahead redo log files only on commit, and it also periodically (at a configurable interval) serializes the resource to its destination on multiple threads for better performance.

In WonderDB, this component is used as the redo log file/transaction manager for disk blocks. Multiple disk blocks containing multiple database resources, such as records and indexes, can be committed or rolled back as a single unit.

For more information on its usage and configuration, please read Getting Started.
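
As a rough illustration of the commit path, here is a minimal write-ahead redo log sketch. The class and method names are made up for illustration and are not the Redo Log Manager's actual API; it only shows the idea that a serialized resource is appended to the redo log and forced to disk at commit time, while the real destination is updated later.

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    // Minimal write-ahead redo log sketch. Names are made up for
    // illustration; this is not the WonderDB component's API.
    public class RedoLogSketch {
        private final FileOutputStream fos;
        private final DataOutputStream log;

        public RedoLogSketch(File logFile) throws IOException {
            this.fos = new FileOutputStream(logFile, true);   // append mode
            this.log = new DataOutputStream(new BufferedOutputStream(fos));
        }

        // On commit: the serialized resource goes to the redo log and is
        // forced to disk. The destination is updated later in the background.
        public synchronized void commit(byte[] serializedResource) throws IOException {
            log.writeInt(serializedResource.length);
            log.write(serializedResource);
            log.flush();
            fos.getFD().sync();   // force to disk: the commit is now durable
        }

        // On rollback nothing is written, so the destination never sees the change.
        public void rollback() { /* in-memory changes are simply discarded */ }

        public static void main(String[] args) throws IOException {
            RedoLogSketch wal = new RedoLogSketch(new File("redo.log"));
            wal.commit("block#12: updated record".getBytes(StandardCharsets.UTF_8));
        }
    }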

Product pipeline – Coming Soon

Implementation of select classes in the java.util.concurrent package

This product provides disk-backed implementations of WonderDB’s hash index and BTree+ index. They will be available as implementations of some of the Map- and List-related concurrent util classes. These implementations also use memory outside the JVM heap. If your application needs to store large lists or sort large collections of objects, this may be a very useful feature for you. We have seen 40K TPS for BTree+ and 60K+ TPS for hash index accesses.
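
The intended programming model is sketched below: application code is written against the standard java.util.concurrent interfaces, so the disk-backed implementation can be dropped in later. The name DiskBackedHashMap is hypothetical, and a plain ConcurrentHashMap stands in so the snippet compiles today.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Sketch of the intended usage: code against ConcurrentMap, so the
    // disk-backed, off-heap implementation can replace ConcurrentHashMap.
    // "DiskBackedHashMap" is a hypothetical name, not a published class.
    public class ConcurrentUtilSketch {
        public static void main(String[] args) {
            // Today: in-heap map.
            ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
            // Later (hypothetical): new DiskBackedHashMap<>("users.idx")
            // keeps the same interface but spills to disk / off-heap memory.

            cache.put("user:1", "alice");
            cache.putIfAbsent("user:2", "bob");
            System.out.println(cache.get("user:1"));   // alice
        }
    }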

Single Node transactional NoSql database

NoSql database with support for relational, document (XML, JSON), and name/value pair data models. It will also support custom data types and functions.

Sharded database with replication and single node transactional support

We are considering the use of ZooKeeper and Kafka for replication. All insert, update, and delete transactions will be sent on the Kafka bus to be picked up by replicas.

We are considering Kafka so that replication data is offloaded to the Kafka servers and the database nodes don’t have to take on replication responsibilities.
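
As a sketch of the idea (not committed design or code), each write transaction could be published to a Kafka topic that replicas consume. The snippet below uses the standard Kafka Java producer (kafka-clients library); the topic name, broker address, and message format are illustrative only.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Sketch only: shows how an insert/update/delete could be published to
    // a Kafka topic for replicas to consume. Topic name, broker address,
    // and message format are illustrative, not final WonderDB design.
    public class ReplicationSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key = collection/shard, value = serialized transaction.
                producer.send(new ProducerRecord<>("wonderdb-replication",
                        "employee", "{\"op\":\"insert\",\"id\":101,\"name\":\"alice\"}"));
                producer.flush();   // replicas tailing the topic apply this change
            }
        }
    }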

We would really like to know your thoughts on this way of using Kafka.