Ampool ADS v1.5.0 improves on prior releases in both functionality and performance.
What's New in 1.5.0
The following are the highlights of this release:
- Row versioning feature, which was available for Ordered MTables, is now also available for Unordered MTables.
- The Ampool server now works with secured HDFS using Kerberos authentication.
- The persistent tier store for FTables now optionally supports the Parquet file format in addition to the existing ORC format support.
- WAN Replication works with MTables.
- FTables can now be mutated by users holding data-manage permissions. This addresses rare situations where a data change is required.
- The Delete API now supports deleting MTable versions qualified by filters.
Interfaces & APIs
- CLI (MASH): Added support to specify local-max-memory for a table as a percentage of the total heap. For a complete list of options, refer to the Table Command Reference.
- Ordered MTables can be accessed through Spark and Hive. With the right settings, each version appears as a separate row in the query output.
- The Hive and Spark connectors work in a secured environment using Kerberos authentication.
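As an example, the percentage form of local-max-memory could be set from MASH along the following lines. This is a sketch only: the table name is illustrative and the exact option name is an assumption, so consult the Table Command Reference for the actual syntax.

```
mash> create table --name=orders --type=ORDERED_VERSIONED --local-max-memory-pct=20
```

Here a value of 20 would cap the table's local data at 20% of the server's total heap (option name hypothetical).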
In addition, several internal enhancements improve the performance of insert, update, and scan operations, and multiple usability issues were fixed.
The following packages are available for download from Ampool's website (S3):
Ampool Base Package (ampool-1.5.0.tar.gz): Includes the Ampool core (MTable, FTable, CoProcessors, local DiskStore for recovery, etc.) and the core interfaces (MASH & Java API)
Ampool Compute/Ingest Connectors:
Spark (ampool-1.5.0-spark_1.6.tar.gz and ampool-1.5.0-spark_2.1.tar.gz)
Installing Ampool v1.5.0
Core Ampool Server & Locator
- Untar the binaries into a new installation directory. After extracting the package contents, you should see the following directory structure:
bin config docs examples lib tools
- To launch Ampool services, start the command-line utility MASH (Memory Analytics Shell) from the installed Ampool directory:
$ <ampool-home>/bin/mash
mash>
Type 'help' at the mash> prompt for a list of commands.
For a detailed explanation of Ampool services and commands, refer to the README in the main directory.
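Once inside MASH, the locator and server can be started with gfsh-style commands roughly as follows. This is a sketch: MASH is derived from Apache Geode's gfsh, and the member names, port, and directories shown are illustrative, not prescribed.

```
mash> start locator --name=locator1 --port=10334 --dir=/data/ampool/locator1
mash> start server --name=server1 --locators=localhost[10334] --dir=/data/ampool/server1
```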
Untar the Ampool connector packages on the Ampool client nodes (ampool-1.5.0-spark_2.1.tar.gz, ampool-hive-1.5.0.tar.gz, ampool-connect-kafka-1.5.0.tar.gz).
Refer to the README in each package to install and use the Ampool connectors with Spark and Hive.
Upgrading to Ampool ADS v1.5.0
Core Ampool Server & Locator
- Stop the existing Ampool server(s) and locator(s) using the MASH CLI.
- Untar the new Ampool core package and start the Ampool servers and locators using the new binaries.
- Make sure to provide the previous version's server and locator directories using the --dir option in the MASH CLI when starting the servers and locators.
- Untar the newer version of the connector packages and use them in place of the previous versions on the classpath when using Spark, Hive, and Kafka with the newer version of Ampool.
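Put together, the server/locator upgrade might look like the following MASH session. This is a sketch: command names follow Geode's gfsh conventions, and the member names and directories are illustrative. The --dir options point at the previous version's working directories so that persisted data is recovered, as described above.

```
# From the old installation:
mash> stop server --dir=/data/ampool/server1
mash> stop locator --dir=/data/ampool/locator1

# From the new installation, reusing the previous working directories:
mash> start locator --name=locator1 --port=10334 --dir=/data/ampool/locator1
mash> start server --name=server1 --locators=localhost[10334] --dir=/data/ampool/server1
```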
Resolved (Major) Issues

|Issue Ref|Description|
|---|---|
|GEN-1967|Fixed incorrect behavior of the preput() co-processor.|
|GEN-1974|Fixed: a Java client with Kerberos credentials could not connect to a Kerberized Ampool cluster.|
|GEN-1831|Fixed MASH output for queries on FTables.|
Known Issues & Limitations

|Issue Ref|Description|Workaround (if any)|
|---|---|---|
|GEN-1161|On local developer machines, if Ampool services are started without specifying a host, they may bind to the Wi-Fi address, which changes as the machine moves between networks. In that case, reconnecting to the locator from MASH fails.|Manually kill the Ampool locator and server processes and restart the services. Alternatively, specify localhost or a stable network interface for binding these services.|
|Limitation|A coprocessor cannot be called on an empty table.|Endpoint coprocessor execution is not supported on an empty table. To check for an empty table, use the MTable.isEmpty() API.|
|GEN-1144|The runExamples script fails if run against an already running Ampool cluster.|Stop the Ampool cluster before running the runExamples script.|
Versions & Compatibility
This distribution is based on the Apache Geode release 1.0.0-incubating.M3. The following table summarizes the minimum supported versions for the different connectors:

|Connector|Supported Versions|
|---|---|
|Apache Spark|2.1.0, 1.6.0|
|Apache Hive|0.14.0, 1.2.1|
|Apache Kafka|0.10.0.1 (confluent-3.0.0) or confluent-3.2.0|
- Code examples: A set of code samples showcasing the table and coprocessor APIs can be found under