Ampool ADS v2.0.0 improves on prior releases in both functionality and performance.
What's New in 2.0.0¶
The following are the highlights of this release:
- FTable Delta update support.
- FTable scan optimization: column statistics are stored per block so that filtered scans can skip blocks that cannot match.
- Ampool Security module: added support for Sentry and LDAP authorization.
- Server scan performance improvements.
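The block-skipping idea behind the FTable scan optimization can be sketched generically as follows. This is an illustration of the technique, not Ampool's actual API: each block keeps min/max statistics per column, and a range filter consults those statistics to skip blocks that cannot contain matching rows.

```python
# Illustrative sketch of scan-time block skipping using per-block
# column statistics (min/max). All names here are hypothetical.

class Block:
    def __init__(self, rows):
        self.rows = rows  # list of dicts: column -> value
        # Per-block statistics: (min, max) for each column.
        self.stats = {
            col: (min(r[col] for r in rows), max(r[col] for r in rows))
            for col in rows[0]
        }

def scan(blocks, column, lo, hi):
    """Return rows with lo <= row[column] <= hi, skipping any block whose
    [min, max] range for the column cannot overlap [lo, hi]."""
    out, skipped = [], 0
    for b in blocks:
        bmin, bmax = b.stats[column]
        if bmax < lo or bmin > hi:  # statistics prove no match: skip block
            skipped += 1
            continue
        out.extend(r for r in b.rows if lo <= r[column] <= hi)
    return out, skipped

blocks = [
    Block([{"ts": 1}, {"ts": 2}, {"ts": 3}]),
    Block([{"ts": 10}, {"ts": 12}]),
    Block([{"ts": 20}, {"ts": 25}]),
]
rows, skipped = scan(blocks, "ts", 9, 13)
print([r["ts"] for r in rows], skipped)  # [10, 12] 2
```

The filter `ts between 9 and 13` touches only the middle block; the other two are eliminated from the scan by their statistics alone, which is what makes filtered scans cheaper on large FTables.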
Interfaces & APIs¶
- CLI (MASH): new MASH command to show table distribution across nodes.
In addition, several internal enhancements improve the performance of insert, update, and scan operations, along with multiple usability fixes.
The following packages are available for download from Ampool's (S3) website:
Ampool Base Package (ampool-2.0.0.tar.gz): includes Ampool core (MTable, FTable, CoProcessors, local DiskStore for recovery, etc.) and core interfaces (MASH & Java API)
Ampool Compute/Ingest Connectors:
Spark (ampool-2.0.0-spark_1.6.tar.gz and ampool-2.0.0-spark_2.1.tar.gz)
Installing Ampool v2.0.0¶
Core Ampool Server & Locator¶
- Untar the binaries in a new installation directory. After extracting the contents from the package, you should see the following directory structure:
bin config examples javadoc lib tools lib-ext-dependencies lib-security lib-tier
- To launch Ampool services, start the command-line utility MASH (Memory Analytics Shell) from the installed Ampool directory:
$ <ampool-home>/bin/mash
mash>
Type 'help' for a list of commands.
For a detailed explanation of ampool services and commands, please refer to the README within the main directory.
- Untar the Ampool connector packages on the Ampool client nodes (ampool-2.0.0-spark_2.1.tar.gz, ampool-hive-2.0.0.tar.gz, ampool-connect-kafka-2.0.0.tar.gz).
- Refer to the README in the respective packages to install and use the Ampool connectors with Spark and Hive.
Upgrading to Ampool ADS v2.0.0¶
Core Ampool Server & Locator¶
- Upgrading from an older version of Ampool ADS is not supported in v2.0.0.
- Untar the newer versions of the connector packages and use them in place of the previous versions in the classpath when using Spark, Hive, and Kafka with the newer version of Ampool.
Resolved (Major) Issues¶
| Issue Ref | Description |
| --- | --- |
| GEN-2048 | Provide functionality to delete all versions of all keys qualified by a given filter. |
| GEN-1804 | Creating a table using a schema with lowercase types fails. |
| GEN-1989 | Introduce a configuration parameter to enable changes with synchronous replication. |
| GEN-2113 | MTable.truncateMTable(tableName) throws MCheckOperationFailException with the message "Delete with given timestamp failed". |
| GEN-2109 | truncateFTable causes a scan with a timestamp filter to fail. |
| GEN-2058 | FTable recovery: the row count of an FTable is not the same after a cluster restart. |
Known Issues & Limitations¶
| Issue Ref | Description | Workaround (if any) |
| --- | --- | --- |
| GEN-1759 | An inserted key is not found in an MTable during delete. In this scenario, the secondary throws RowKeyDoesNotExistException (which is propagated to the client), but the entry also gets deleted, so a subsequent Get or Scan reports no data for the key. | Ignore |
| GEN-2261 | Row count mismatch in an FTable if eviction and recovery happen simultaneously. | None |
| GEN-2250 | Row count mismatch in an FTable when a new server joins the distributed system. | None |
Versions & Compatibility¶
This distribution is based on the Apache Geode release 1.0.0-incubating.M3. The following table summarizes the supported versions for the different connectors:
| Connector | Supported Versions |
| --- | --- |
| Apache Spark | 2.1.0, 1.6.0 |
| Apache Hive | 0.14.0, 1.2.1 |
| Apache Kafka | 0.10.0.1/confluent-3.0.0 or confluent-3.2.0 |
- Code examples: a set of code samples showcasing the table and coprocessor APIs can be found under