Production Best Practices

Running Ampool in production requires attention to a number of considerations so that the system operates smoothly, scales, and performs well. This document highlights best-practice guidelines.

Using the right type of table

Different Ampool table types are typically used to store data with different access characteristics. The memory required to store a given amount of data also varies by table type.

  • MTable: Mutable in-memory table that supports both range and hash partitioning schemes based on the row key. Hash-partitioned tables are "unordered" tables; they provide a balanced distribution of data across the table's partitions/buckets and are suitable for low-latency get/put (update) workloads. Range-partitioned tables are "ordered" tables; they order table rows by row key and, in addition to get/put (update) workloads, are also suitable for range/scan queries on the row key. MTable supports simple as well as complex (List, Map, Struct) data types for column values (see the sketch following this list).

In a data warehousing context these tables are typically used as mutable dimension tables.

  • FTable: Immutable in-memory table. Data ingested into this table is immutable; it is hash partitioned on a specified table column and internally ordered by ingestion time. The ingestion time is exposed as an additional column so that users can filter and efficiently retrieve data over a given time range without a full table scan. FTable supports only "append" and "scan" operations. Because records are immutable, FTable internally stores multiple records in a single block, which lowers the per-record memory overhead from internal housekeeping metadata compared to MTable. FTable does not require the user to specify a row key, since it does not support get/put operations.

FTable has a concept of data tiers: when the amount of data stored in Ampool crosses the heap eviction threshold on any Ampool server, data is evicted to the next storage tier to free up memory. Ampool supports multiple storage tiers, where evicted data is stored in ORC format. A typical recommended FTable configuration uses memory (DRAM) as tier-0, local disks (preferably SSDs) attached to individual Ampool server nodes as tier-1, and a shared archive file system or object store, e.g. HDFS/S3/HBase, as tier-2. Data evicted to the disk and archive tiers is a seamless extension of the data in memory, and Ampool serves it transparently regardless of which tier it resides on.

In a data warehousing context, FTable is typically used as a fact table to store an immutable stream of events.

  • Geode's K/V Region: Ampool extends the Geode platform and hence also supports all of Geode's K/V region types. These are suitable for OLTP and web applications that store and retrieve stateful Java objects, e.g. user sessions, in Ampool. K/V regions are also suitable for storing JSON documents natively, with the ability to index any field for quick search and retrieval.
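The sketch below illustrates how the MTable/FTable choice might look from a Java client: an unordered MTable for a mutable dimension dataset and an FTable for an immutable event stream. The class and method names (MClientCacheFactory, MTableDescriptor, FTableDescriptor, Admin, etc.), the locator address, and the column names are assumptions modeled on the Ampool Java client and may differ in your release; consult the Ampool API reference for the exact API.

```java
// Illustrative sketch only -- all class/method names below are assumptions, not a
// definitive Ampool API; check the Ampool client documentation for your release.
import io.ampool.conf.Constants;
import io.ampool.monarch.table.Admin;
import io.ampool.monarch.table.MClientCache;
import io.ampool.monarch.table.MClientCacheFactory;
import io.ampool.monarch.table.MConfiguration;
import io.ampool.monarch.table.MTableDescriptor;
import io.ampool.monarch.table.MTableType;
import io.ampool.monarch.table.ftable.FTableDescriptor;

public class ChooseTableType {
  public static void main(String[] args) {
    // Connect to the cluster through a locator (host/port are placeholders).
    MConfiguration conf = MConfiguration.create();
    conf.set(Constants.MonarchLocator.MONARCH_LOCATOR_ADDRESS, "locator-host");
    conf.setInt(Constants.MonarchLocator.MONARCH_LOCATOR_PORT, 10334);
    MClientCache cache = new MClientCacheFactory().create(conf);
    Admin admin = cache.getAdmin();

    // Mutable dimension data: hash-partitioned ("unordered") MTable for low-latency get/put.
    MTableDescriptor dimDesc = new MTableDescriptor(MTableType.UNORDERED);
    dimDesc.addColumn("customer_id").addColumn("name").addColumn("segment");
    admin.createTable("customer_dim", dimDesc);

    // Immutable event stream: FTable, hash-partitioned on a chosen column, append/scan only.
    FTableDescriptor factDesc = new FTableDescriptor();
    factDesc.addColumn("customer_id").addColumn("amount").addColumn("event_type");
    factDesc.setPartitioningColumn("customer_id");
    admin.createFTable("sales_facts", factDesc);
  }
}
```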

Replicated vs Partitioned datasets

Apache Geode supports both replicated and partitioned regions, while Ampool currently (as of release 1.3) supports only the "partitioned" strategy for distributing data for its MTable and FTable types. It supports only hash partitioning for FTable, and both hash and range partitioning schemes for MTable.

Typically, smaller datasets that are updated less frequently are stored in replicated regions, primarily as a cache present on all Ampool nodes. All data in a replicated region is present on every Ampool node. In replicated regions the replicas on individual members are updated independently; if a record with the same row key is updated on two different replicas, the data is eventually made consistent through conflict detection.
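Since replication is a Geode region capability (not an Ampool table capability, per the previous paragraph), a small reference dataset would be created as a Geode replicated region on the servers. A minimal server-side sketch using Geode's region API follows; the region name and sample data are illustrative.

```java
// Server-side Geode sketch: a small, rarely-updated reference dataset held as a
// replicated region so every member keeps a full copy.
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class ReplicatedRegionExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // REPLICATE: the full dataset is kept on every member that hosts the region.
    Region<String, String> countryCodes =
        cache.<String, String>createRegionFactory(RegionShortcut.REPLICATE)
            .create("country_codes");

    countryCodes.put("US", "United States");
    countryCodes.put("IN", "India");
  }
}
```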

Partitioned regions and Ampool tables are used to store large datasets. In MTable the row key is used for partitioning, while in FTable any table column can be used as the partitioning key. Choosing the right partitioning key is important to distribute and load-balance the data evenly across the partitions/buckets.

The default number of partitions is 113 for an Ampool table or Geode K/V region. We typically recommend about 10 partitions per table/region per Ampool member server; for example, a table on a cluster expected to grow to 12 servers would warrant roughly 120 buckets. Note that Ampool does not support dynamic splitting of buckets, so when choosing the number of buckets for a region you should account for the cluster size in the near future if you plan to expand the cluster.

Note

Ampool provides an import/export utility that can export data out of Ampool tables; the data can then be imported back after recreating the tables with a larger number of partitions. See the Geode documentation on how to configure the number of buckets for Apache Geode partitioned K/V regions, and how to calculate the number of buckets; the same guidance applies to Ampool tables.

Partitioned regions/tables should also be configured with a non-zero redundancy factor of 2 or more for high availability and improved data locality. See the Geode documentation to understand high availability for partitioned regions; the same applies to Ampool tables.
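For a Geode K/V region, both settings above (total bucket count and redundancy) are configured through the partition attributes, as in the server-side sketch below. The bucket count of 127 reflects the ~10-buckets-per-server guideline for an assumed 12-server target (Geode's own sizing guidance suggests using a prime number); Ampool tables expose an equivalent bucket-count option at table creation, as noted below.

```java
// Server-side Geode sketch: partitioned region with an explicit bucket count and
// redundancy. Region name and sizing target are illustrative.
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class PartitionedRegionExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    PartitionAttributesFactory<String, Object> paf = new PartitionAttributesFactory<>();
    paf.setTotalNumBuckets(127);   // ~10 buckets/server for a planned 12-server cluster
    paf.setRedundantCopies(2);     // keep 2 redundant copies of every bucket for HA

    Region<String, Object> orders =
        cache.<String, Object>createRegionFactory(RegionShortcut.PARTITION)
            .setPartitionAttributes(paf.create())
            .create("orders");
  }
}
```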

Note

As of release 1.3, Ampool does not support fixed partitioning for Ampool tables. During Ampool table creation there is an option to specify the number of buckets for the table.

Considerations for client single-hop access to server-side partitioned regions

Single-hop access is enabled by default, but it usually works well only with smaller clusters. With single-hop access the client connects to all servers, so more connections are used overall, which limits the scaling of the cluster. Single hop gives the biggest benefit when data access is well balanced across your servers. When single-hop access is disabled, the client uses whatever server connection is available, the same as with all other operations. The server that receives the request determines the data location and contacts the host, which might be a different server, so more multiple-hop requests are made to the server system.
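The sketch below shows where this setting lives on the Geode client: the pool-level single-hop flag can be turned off when per-client connections to every server are undesirable on a large cluster. The locator host and port are placeholders.

```java
// Geode client sketch: disabling single-hop (metadata-aware) routing on the client pool.
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class ClientPoolExample {
  public static void main(String[] args) {
    ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("locator-host", 10334)
        .setPoolPRSingleHopEnabled(false)   // default is true; leave it on for small clusters
        .create();
  }
}
```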

See the Geode documentation for more details on determining whether you need single-hop client access.

Design of Entry/Row key for Ampool's MTable

The row key in an Ampool MTable is stored as a byte array. For an unordered MTable, the row key bytes are hashed to distribute entries across multiple buckets/partitions. For an ordered MTable, row keys are sorted and range partitioned.

Note

In Ampool's FTable there is no concept of a row key, as FTable does not support indexed lookups/updates.

It is good to keep the row key as small as reasonable; e.g. an 8-byte unsigned long key can store a large number of records much more efficiently (~3x less space) than a string representation of the same record numbers.

Also, storing monotonically increasing row keys (e.g. timestamps for time-series data) in an ordered MTable typically directs all rows to a single range partition and thus lowers the write throughput. To allow parallel ingestion into multiple buckets, it helps to prefix the timestamp with an entity id; assuming many entities insert data at the same time, ingestion is then spread over multiple buckets (see the sketch below). Consult Ampool support for further clarification on how to achieve this.
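One simple way to build such a key is a fixed-width composite byte array: an entity-id prefix followed by a big-endian timestamp. Writes from different entities then land in different key ranges (and therefore different buckets), while rows for a single entity remain sorted by time. The 4-byte/8-byte layout below is an illustrative choice, not an Ampool requirement.

```java
// Sketch: composite row key for an ordered MTable.
import java.nio.ByteBuffer;

public class RowKeys {
  /** 12-byte key: [entityId:int][timestampMillis:long], both big-endian. */
  public static byte[] compositeKey(int entityId, long timestampMillis) {
    return ByteBuffer.allocate(Integer.BYTES + Long.BYTES)
        .putInt(entityId)           // prefix spreads entities across range partitions
        .putLong(timestampMillis)   // suffix keeps each entity's rows in time order
        .array();
  }

  public static void main(String[] args) {
    byte[] key = compositeKey(42, System.currentTimeMillis());
    System.out.println("key length = " + key.length);   // 12 bytes vs. 20+ for a string form
  }
}
```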

Colocation of compute frameworks with Ampool

Ampool supports various compute frameworks, e.g. Spark, Hive, and Apex, that access data from Ampool and perform batch or streaming computations. It is recommended to colocate the workers of these compute frameworks with the Ampool member servers. This typically avoids accessing Ampool data over the network and thus provides the maximum benefit of Ampool's in-memory, low-latency data access. Ampool exposes data locality information, i.e. which table buckets/partitions reside on which servers, along with their sizes. The Spark/Hive connectors use this information to colocate, as much as possible, their processing tasks on the nodes where the data resides.

Tiered Storage for FTable

Ampool supports a tiered storage configuration for FTable. It is recommended to configure these storage tiers in order of gradually increasing latency: tier-0 as the DRAM layer, tier-1 as an SSD-based local disk store, and, if needed, a third tier on a shared filesystem or object store, e.g. HDFS or S3. This gradual tiering helps Ampool retrieve data from, and expire data to, the next tier efficiently.

Ampool/Geode Network configuration best practices

Ampool extends Apache Geode, and hence many of the guidelines and configuration best practices described in the Geode documentation also apply to an Ampool cluster and its new data storage abstractions, MTable and FTable. Ampool currently supports the client/server topology, so apart from the peer-to-peer topology guidelines, the rest of the networking guidelines apply to Ampool's network configuration. The Networking Best Practices link from the Geode wiki documentation is provided here for more details.

Ampool Cluster Configuration & Sizing Guidelines

Similar to the network configuration guidelines, Geode's cluster sizing guidelines are also applicable to an Ampool cluster. Ampool also provides a sizing worksheet to determine how much memory is required to store a given amount of data in the memory tier of Ampool's MTable and FTable.