Ampool 1.1 is the second packaged and documented release of Ampool, targeted at near-app analytics: for users who want analytics on the data 'exhaust' of their applications. This segment is also referred to as embedded analytics.
## What's New in 1.1.0
The following are some highlights of this release:

Core data store

- Upgraded the underlying Geode platform to the 1.0.0-incubating.M3 release
- Reduced memory footprint for table entries
- Coprocessor support for unordered tables
- A few bug fixes for corner cases in the scan operation

Interfaces & APIs

- CLI (MASH): added support for scan and delete; added options to specify column types or a JSON schema during the table create operation
- REST: added CRUD support for MTable
- Spark: added an option to specify the number of redundant copies; notifications for new data arrival
In addition, several internal enhancements reduce the memory footprint, allowing more user data per GB of DRAM.
The following packages are available for download from Ampool's website (S3):

- Ampool Base Package (ampool-1.1.0.tar.gz): Includes the Ampool core (MTable, coprocessors, local DiskStore for recovery, etc.) and the core interfaces (MASH & Java API)
- Ampool Compute Connectors: Spark (ampool-spark-1.1.0.tar.gz) and Hive (ampool-hive-1.1.0.tar.gz) connector packages
## Installing Ampool v1.1.0
### Core Ampool Server & Locator
- Untar the binaries into a new installation directory. After extracting the contents of the package, you should see the following directory structure:
bin config docs examples lib tools
- To launch Ampool services, start the command-line utility MASH (Memory Analytics Shell) by typing the following from the installed Ampool directory:

$ <ampool-home>/bin/mash
mash>
Type 'help' for a list of commands.
For a detailed explanation of ampool services and commands, please refer to the README within the main directory.
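For illustration, a first MASH session to bring up a locator and a server might look like the sketch below. MASH derives from Geode's gfsh, so these commands assume gfsh-style syntax; the member names (locator1, server1) and the port are placeholders. Consult the bundled README for the exact commands and defaults supported by this release.

```
$ <ampool-home>/bin/mash
mash> start locator --name=locator1 --port=10334
mash> start server --name=server1 --locators=localhost[10334]
mash> list members
```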
### Ampool Compute Connectors

- Untar the Ampool connector packages (ampool-spark-1.1.0.tar.gz, ampool-hive-1.1.0.tar.gz) on the Ampool client nodes.
- Refer to the README in the respective packages to install and use the Ampool connectors with Spark and Hive.
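As a rough sketch, using the Spark connector typically means putting the connector jar on the Spark classpath. The paths below are placeholders and the jar name inside the package is an assumption; the connector's README describes the actual artifact and options.

```
$ tar -xzf ampool-spark-1.1.0.tar.gz
$ spark-shell --jars <ampool-spark-install-dir>/lib/<connector-jar>
```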
## Upgrading to Ampool v1.1.0
### Core Ampool Server & Locator
- Stop the existing Ampool server(s) and locator(s) using the MASH CLI.
- Untar the new Ampool core package and start the Ampool servers and locators using the new binaries.
- Make sure to provide the previous version's server and locator directories using the --dir option in the MASH CLI when starting the servers and locators.
- Untar the newer version of the connector packages and use them in place of the previous version in the classpath when using Spark and Hive with the newer version of Ampool.
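The core upgrade steps above might look like the following MASH session. This is a sketch assuming gfsh-style stop/start commands; the member names and data directories are placeholder examples, and the --dir option is the one mentioned above for reusing the previous version's directories.

```
# Using the old binaries, stop the running services:
$ <old-ampool-home>/bin/mash
mash> stop server --dir=/data/ampool/server1
mash> stop locator --dir=/data/ampool/locator1

# Using the new 1.1.0 binaries, restart on the previous directories:
$ <new-ampool-home>/bin/mash
mash> start locator --name=locator1 --dir=/data/ampool/locator1
mash> start server --name=server1 --dir=/data/ampool/server1 --locators=localhost[10334]
```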
## Known Issues & Limitations

| Issue Ref | Description | Workaround (if any) |
| --- | --- | --- |
| GEN-1161 | On local developer machines, if Ampool services are started without specifying a host, they may bind to the Wi-Fi address, which changes as you move between networks. In such scenarios, reconnecting to the locator from MASH fails. | Manually kill the Ampool locator and server processes and restart the services. Alternatively, bind these services to localhost or a stable network interface. |
| GEN-1203 | AggregationClient.rowCount causes java.lang.OutOfMemoryError: GC overhead limit exceeded when the number of keys is 100M. | |
| GEN-1205 | Repeating a delete operation for an already-completed delete does not throw an exception. | Avoid repeating the delete operation from the Java APIs. |
| GEN-905 | A coprocessor cannot be called on an empty table. | Endpoint coprocessor execution is not supported on an empty table. To check for an empty table, use the MTable isEmpty() API. |
| GEN-1144 | The runExamples script fails when an Ampool cluster is already running. | Run the runExamples script after stopping the Ampool cluster. |
| GEN-1235 | For a persisted MTable, the number of records returned by a scan after a cluster restart may differ from the number returned by a scan before the restart. | Re-run scan operations, as required. |
| GEN-1229 | A scan may return more records than actually exist if a failover happens during the scan, because of internal retries of the operation. | In case of cluster failures, re-run the scan operation, as required. |
| GEN-1228 | A scan operation may fail if a failover happens while the scan is running. | Re-run scan operations, as required. |
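For GEN-1161, binding the services to a stable interface at startup avoids the problem. Assuming gfsh-style options (check the MASH help for the exact flag names in this release), a sketch:

```
mash> start locator --name=locator1 --bind-address=localhost
mash> start server --name=server1 --bind-address=localhost --locators=localhost[10334]
```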
## Versions & Compatibility
This distribution is based on the latest Apache Geode release (1.0.0-incubating.M3). The following table summarizes the minimum versions supported for the different connectors:
| Connector | Supported Versions |
| --- | --- |
| Apache Spark | 1.5.1, 1.6.0 |
| Apache Hive | 0.14.0, 1.2.1 |
## References & Links to Docs
- Java API: You can find the javadoc for the Ampool client API under
- Code examples: A set of code samples showcasing the table and coprocessor APIs can be found under