Using MASH (CLI shell)

Prerequisites

On all cluster nodes:

  • CentOS 6.x/7.x
  • Java JDK 1.8
# Install JDK
sudo yum install java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64
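
You can confirm the JDK installation afterwards:

# Verify the installed Java version
java -version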

Install a Single-Node Ampool Cluster Using the MASH Client

  1. Install Ampool package
  2. Start the MASH
  3. Starting Ampool Services (Locator and Server) and exploring the cluster
  4. Creating and Accessing Ampool Data Store
  5. Stopping the Ampool Services
  6. Monitoring Cluster, Members & Ampool table metrics

1. Install Ampool package (ampool-<version>.tar.gz)

To install the Ampool package, untar it under an appropriate directory (e.g. /usr/local/share/). Having successfully untarred the package, you should see the following directory structure and files:

ampool-<version> (henceforth referred to as <ampool-home>)
├── AmpoolEULA.pdf
├── bin
│   ├── gfsh
│   ├── gfsh.bat
│   ├── gfsh-completion.bash
│   ├── mash
│   ├── start_ampool.sh
│   └── stop_ampool.sh
├── config
│   ├── ampool_locator.properties
│   ├── ampool-log4j2.xml
│   ├── ampool_security.properties
│   ├── ampool_server.properties
│   ├── ampool_server_system.properties (since release 1.2.0)
│   ├── ampool-site.xml
│   └── cache.xml
├── docs
│   └── api
├── examples
│   ├── pom.xml
│   ├── runExamples.sh
│   └── src
├── GEODE-DISCLAIMER
├── GEODE-LICENSE
├── GEODE-NOTICE
├── lib
│   ├── activation-<version>.jar
│   ├── ampool-core-<version>.jar
│   ├── ampool-dependencies.jar
│   ...
│
├── README.md
├── RELEASE_INFO
└── tools
    ├── Extensions
    └── Pulse

Note

In this document, <ampool-home> refers to the full directory path in which the Ampool software is installed (e.g. /usr/local/share/ampool-<version>/).
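
For example, assuming the downloaded tarball is in the current directory (the target directory is illustrative):

# Extract the Ampool package under /usr/local/share
sudo tar -xzf ampool-<version>.tar.gz -C /usr/local/share/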

2. Start the MASH (Ampool's Memory Analytics Shell)

Command "help" shows all the MASH commands. Some commands are shown not available until you connect to ampool cluster using "connect" command. MASH Shell has tab completion capability to list the available options for any MASH command.

$ <ampool-home>/bin/mash
    ______________________________     __
   / _    _   / _____  / ______/ /____/ /
  / / /  / / / /____/ /_____  / _____  /
 / / /__/ / /  ____  /_____/ / /    / /
/_/      /_/_/    /_/_______/_/    /_/    v1.1.0

Memory Analytics Shell:To Monitor and manage Ampool
mash>version
v1.1.0

mash>sh "pwd"
/usr/local/share/ampool

mash>help
alter disk-store (Available)
    Alter some options for a region or remove a region in an offline
    disk store.

alter region (Not Available)
    Alter a region with the given path and configuration.

alter runtime (Not Available)
    Alter a subset of member or members configuration properties
    while running.

backup disk-store (Not Available)
    Perform a backup on all members with persistent data. The target
    directory must exist on all members, but can be either local or
    shared. This command can safely be executed on active members
    and is strongly recommended over copying files via operating
    system commands.

change loglevel (Not Available)
    This command changes log-level run time on specified servers.

clear defined indexes (Available)
    Clears all the defined indexes.

close durable-client (Not Available)

3. Starting Ampool Services

Two services, the Ampool Locator and the Ampool Server (also known as the Cache Server), need to be started to get an Ampool system running. A typical Ampool cluster consists of one Locator instance and one or more Server instances (one Server instance per cluster node); the Locator instance can be colocated with a Server instance on one of the cluster nodes. In a single-node cluster, a single instance of the Locator and the Server run on the same node. The recommended order of starting the two services is the Locator first, followed by the Server(s). In a multi-node cluster there is typically a startup dependency between servers, so all servers should be started simultaneously, i.e. without waiting for one server to come up successfully before starting the next one. Similarly, the recommended way to stop the two services is in the reverse order: servers first, then the Locator.

3.1 Starting Ampool Locator

The Locator can be started using the MASH "start locator" command as shown below. For the complete list of supported options, run "help start locator". The locator home directory specified using the --dir option must exist and must have read/write/execute permissions for the user running the MASH shell. The default port, if not specified, is 10334.
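
For example, to create the locator home directory used below:

# Create the locator working directory (must be writable by the current user)
mkdir -p /var/ampool/L1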

Note

Replace <ampool_home> with the Ampool installation directory path in the following command.

mash>start locator --name=Locator1 --dir=/var/ampool/L1 --port=10334 --properties-file=<ampool_home>/config/ampool_locator.properties

Starting a Ampool Locator in /private/var/ampool/L1...
....
Locator in /private/var/ampool/L1 on 192.168.65.1[10334] as Locator1 is currently online.
Process ID: 56049
Uptime: 2 seconds
GemFire Version: 1.0.0-incubating.M3
Java Version: 1.8.0_73
Log File: /private/var/ampool/L1/Locator1.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /usr/local/share/ampool/lib/ampool-core-1.1.0.jar:/usr/local/share/ampool/lib/ampool-dependencies.jar

Successfully connected to: JMX Manager [host=192.168.65.1, port=1099]

Cluster configuration service is up and running.
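
MASH inherits gfsh's status commands; assuming your release retains them, you can check on the locator later from a fresh shell:

mash>status locator --dir=/var/ampool/L1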

3.2 Starting Ampool Server

The Server can be started using the MASH "start server" command as shown below. The server home directory specified using the --dir option must exist and must have read/write/execute permissions for the user running the MASH shell; the Server keeps persistence and overflow data in this directory. If you want to use your installed Hadoop libraries, instead of those packaged with Ampool, to initialize the default ORC-based tier store, use the --ext-classpath option (an illustrative example follows the startup output below). The following jars are the minimum required to initialize the default tier store (names are indicative and may change if you are using a distribution of Apache Hadoop):

  • hadoop-common.jar - Hadoop FS API
  • hadoop-auth.jar - Hadoop util PlatformName class
  • hive-exec.jar - Hive-related classes such as org.apache.hadoop.hive.ql.io.orc.CompressionKind
  • commons-collections-3.2.2.jar - unmodifiable Map support

For the complete list of supported options, run "help start server".

Note

Replace <ampool_home> with the Ampool installation directory path in the following command.

Since the v1.2.0 release:
The --ampool-properties-file option has been added to provide the ampool_server_system.properties file, located under the <ampool_home>/config directory. This file contains specific properties with recommended values for tuning some operational aspects of Ampool; you can modify the values to suit your environment.
In future releases these properties will be merged into the ampool_server.properties file.

mash>start server --name=Server1 --server-port=40404 --locators=localhost[10334] --dir=/var/ampool/S1 --initial-heap=1g --max-heap=1g --eviction-heap-percentage=75 --critical-heap-percentage=90 --properties-file=<ampool_home>/config/ampool_server.properties --ampool-properties-file=<ampool_home>/config/ampool_server_system.properties

Starting a Ampool Server in /private/var/ampool/S1...
......
Server in /private/var/ampool/S1 on 192.168.65.1[40404] as Server1 is currently online.
Process ID: 56062
Uptime: 3 seconds
GemFire Version: 1.0.0-incubating.M3
Java Version: 1.8.0_73
Log File: /private/var/ampool/S1/Server1.log
JVM Arguments: -Dgemfire.locators=localhost[10334] -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /usr/local/share/ampool/lib/ampool-core-1.1.0.jar:/usr/local/share/ampool/lib/ampool-dependencies.jar
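
To initialize the tier store against your own Hadoop installation instead, extend the same command with --ext-classpath. The jar paths below are purely illustrative and depend on your Hadoop/Hive layout:

mash>start server --name=Server1 --server-port=40404 --locators=localhost[10334] --dir=/var/ampool/S1 --properties-file=<ampool_home>/config/ampool_server.properties --ext-classpath=/opt/hadoop/share/hadoop/common/hadoop-common.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth.jar:/opt/hive/lib/hive-exec.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar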

3.3 Exploring the Ampool Cluster/Services

If you are using the same MASH shell instance as above, where the Ampool services were started, the MASH client is already connected to the cluster (Locator). If you have restarted the MASH shell, however, you first need to explicitly connect to one of the cluster locators:

mash>connect --locator=localhost[10334]

Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=192.168.65.1, port=1099] ..
Successfully connected to: [host=192.168.65.1, port=1099]

mash>list members
  Name   | Id
-------- | -------------------------------------------------
Locator1 | 192.168.65.1(Locator1:64250:locator)<ec><v0>:1024
Server1  | 192.168.65.1(Server1:64295)<ec><v1>:1025

mash>describe member --name=Server1
Name        : Server1
Id          : 192.168.65.1(Server1:64295)<ec><v1>:1025
Host        : 192.168.65.1
Regions     : .AMPOOL.MONARCH.TABLE.META.
PID         : 64295
Groups      :
Used Heap   : 136M
Max Heap    : 989M
Working Dir : /private/var/ampool/S1
Log file    : /private/var/ampool/S1/Server1.log
Locators    : localhost[10334]

Cache Server Information
Server Bind              : null
Server Port              : 40404
Running                  : true
Client Connections       : 1

mash>describe connection
Connection Endpoints
--------------------
192.168.65.1[1099]

mash>describe config --member=Server1 --hide-defaults
Configuration of member : "Server1"

JVM command line arguments
---------------------------------------------------
-Dgemfire.locators=localhost[10334]
-Dgemfire.use-cluster-configuration=true
-XX:OnOutOfMemoryError=kill -KILL %p
-Xms1g
-Xmx1g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=60
-Dgemfire.launcher.registerSignalHandlers=true
-Djava.awt.headless=true
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806

GemFire properties defined using the API
...........................................................
name                                     : Server1

Cache attributes
...........................................................
is-server        : true
pdx-persistent   : true

Cache-server attributes
 . tcp-no-delay         : true

mash>

Your single-node Ampool cluster is now all set to work!

4. Creating and Accessing Ampool Data Store

Applications can create tables in the Ampool data store and access/manipulate the data in various ways. Analytics applications can use compute frameworks such as Apache Spark and Apache Hive to access the Ampool store.
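
Tables can also be created and inspected directly from MASH. The sketch below is illustrative only (the table name and columns are made up, and the exact options may differ between releases; run "help create table" in your shell for the authoritative list):

mash>create table --name=EmployeeTable --type=UNORDERED --columns=ID,NAME,AGE
mash>describe table --name=/EmployeeTable
mash>delete table --name=/EmployeeTable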

5. Stopping the Ampool Services

The Ampool services (both Locator and Server) can be stopped using MASH commands. It is recommended that the servers are stopped before the Locator. Assuming the Locator and Server listed above are running, you can use the command below to stop the server and then the Locator.

<ampool-home>/bin/mash -e "connect" -e "stop server --name=Server1" -e "stop locator --name=Locator1"

(1) Executing - connect

Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=192.168.65.1, port=1099] ..
Successfully connected to: [host=192.168.65.1, port=1099]


(2) Executing - stop server --name=Server1

...

(3) Executing - stop locator --name=Locator1

....
No longer connected to 192.168.65.1[1099].

No longer connected to 192.168.65.1[1099].

Alternatively, if you want to stop all the server members in the Ampool cluster (optionally including the Locator), you can use the shutdown command.

Warning

Stopping all server members may cause data-loss if the disk-persistence is not enabled for the tables.

<ampool-home>/bin/mash -e "connect" -e "shutdown --include-locators=true"

(1) Executing - connect

Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=192.168.65.1, port=1099] ..
Successfully connected to: [host=192.168.65.1, port=1099]


(2) Executing - shutdown --include-locators=true

Shutdown is triggered
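
Assuming your release retains gfsh's --time-out option for shutdown, you can also bound how long members are given to shut down gracefully (value in seconds):

<ampool-home>/bin/mash -e "connect" -e "shutdown --include-locators=true --time-out=60"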

6. Monitoring Cluster, Members & Ampool table metrics

Ampool extends Geode's JMX monitoring to Ampool tables, and the MASH client also provides a basic capability to view detailed metrics for the Ampool cluster, its members (i.e. the Server and Locator instances), and the tables created. The following example shows the various metrics available for these entities:

$ <ampool_home>/bin/mash -e "connect" -e "show metrics" -e "show metrics --member=L1" -e "show metrics --member=S1" -e "show metrics --region=/test"

(1) Executing - connect

Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=10.0.0.173, port=1099] ..
Successfully connected to: [host=10.0.0.173, port=1099]

(2) Executing - show metrics

Cluster-wide Metrics

Category  |        Metric         | Value
--------- | --------------------- | -----
cluster   | totalHeapSize         | 8271
cache     | totalRegionEntryCount | 45
          | totalRegionCount      | 2
          | totalMissCount        | 2
          | totalHitCount         | 0
diskstore | totalDiskUsage        | 1112
          | diskReadsRate         | 0
          | diskWritesRate        | 0
          | flushTimeAvgLatency   | 0
          | totalBackupInProgress | 0
query     | activeCQCount         | 0
          | queryRequestRate      | 0

(3) Executing - show metrics --member=L1

Member Metrics

  Category    |              Metric              | Value
------------- | -------------------------------- | -------------------
member        | upTime                           | 594
              | cpuUsage                         | 0.13410000503063202
              | currentHeapSize                  | 73
              | maximumHeapSize                  | 3641
jvm           | jvmThreads                       | 66
              | fileDescriptorLimit              | -1
              | totalFileDescriptorOpen          | -1
region        | totalRegionCount                 | 0
              | totalRegionEntryCount            | 0
              | totalBucketCount                 | 0
              | totalPrimaryBucketCount          | 0
              | getsAvgLatency                   | 0
              | putsAvgLatency                   | 0
              | createsRate                      | 0
              | destroyRate                      | 0
              | putAllAvgLatency                 | 0
              | totalMissCount                   | 2
              | totalHitCount                    | 0
              | getsRate                         | 0
              | putsRate                         | 0
              | cacheWriterCallsAvgLatency       | 0
              | cacheListenerCallsAvgLatency     | 0
              | totalLoadsCompleted              | 0
serialization | serializationRate                | 0
              | serializationLatency             | 0
              | deserializationRate              | 0
              | deserializationLatency           | 0
              | deserializationAvgLatency        | 0
              | PDXDeserializationAvgLatency     | 0
              | PDXDeserializationRate           | 0
communication | bytesSentRate                    | 0
              | bytesReceivedRate                | 0
function      | numRunningFunctions              | 0
              | functionExecutionRate            | 0
              | numRunningFunctionsHavingResults | 0
transaction   | totalTransactionsCount           | 0
              | transactionCommitsAvgLatency     | 0
              | transactionCommittedTotalCount   | 0
              | transactionRolledBackTotalCount  | 0
              | transactionCommitsRate           | 0
diskstore     | totalDiskUsage                   | 0
              | diskReadsRate                    | 0
              | diskWritesRate                   | 0
              | flushTimeAvgLatency              | 0
              | totalQueueSize                   | 0
              | totalBackupInProgress            | 0
lock          | lockWaitsInProgress              | 0
              | totalLockWaitTime                | 0
              | totalNumberOfLockService         | 2
              | requestQueues                    | 0
eviction      | lruEvictionRate                  | 0
              | lruDestroyRate                   | 0
distribution  | getInitialImagesInProgress       | 0
              | getInitialImageTime              | 0
              | getInitialImageKeysReceived      | 0
offheap       | maxMemory                        | 0
              | freeMemory                       | 0
              | usedMemory                       | 0
              | objects                          | 0
              | fragmentation                    | 0
              | compactionTime                   | 0

(4) Executing - show metrics --member=S1

Member Metrics

  Category    |              Metric              | Value
------------- | -------------------------------- | ----------------------------
member        | upTime                           | 534
              | cpuUsage                         | 0.33914342522621155
              | currentHeapSize                  | 177
              | maximumHeapSize                  | 989
jvm           | jvmThreads                       | 51
              | fileDescriptorLimit              | -1
              | totalFileDescriptorOpen          | -1
region        | totalRegionCount                 | 2
              | listOfRegions                    | test
              |                                  | .AMPOOL.MONARCH.TABLE.META.
              | rootRegions                      | /test
              |                                  | /.AMPOOL.MONARCH.TABLE.META.
              | totalRegionEntryCount            | 45
              | totalBucketCount                 | 1
              | totalPrimaryBucketCount          | 1
              | getsAvgLatency                   | 0
              | putsAvgLatency                   | 0
              | createsRate                      | 0
              | destroyRate                      | 0
              | putAllAvgLatency                 | 0
              | totalMissCount                   | 0
              | totalHitCount                    | 0
              | getsRate                         | 0
              | putsRate                         | 0
              | cacheWriterCallsAvgLatency       | 0
              | cacheListenerCallsAvgLatency     | 0
              | totalLoadsCompleted              | 0
serialization | serializationRate                | 0
              | serializationLatency             | 0
              | deserializationRate              | 0
              | deserializationLatency           | 0
              | deserializationAvgLatency        | 0
              | PDXDeserializationAvgLatency     | 0
              | PDXDeserializationRate           | 0
communication | bytesSentRate                    | 4725
              | bytesReceivedRate                | 0
function      | numRunningFunctions              | 0
              | functionExecutionRate            | 0
              | numRunningFunctionsHavingResults | 0
transaction   | totalTransactionsCount           | 0
              | transactionCommitsAvgLatency     | 0
              | transactionCommittedTotalCount   | 0
              | transactionRolledBackTotalCount  | 0
              | transactionCommitsRate           | 0
diskstore     | totalDiskUsage                   | 1112
              | diskReadsRate                    | 0
              | diskWritesRate                   | 0
              | flushTimeAvgLatency              | 0
              | totalQueueSize                   | 0
              | totalBackupInProgress            | 0
lock          | lockWaitsInProgress              | 0
              | totalLockWaitTime                | 0
              | totalNumberOfLockService         | 2
              | requestQueues                    | 0
eviction      | lruEvictionRate                  | 0
              | lruDestroyRate                   | 0
distribution  | getInitialImagesInProgress       | 0
              | getInitialImageTime              | 0
              | getInitialImageKeysReceived      | 0
offheap       | maxMemory                        | 0
              | freeMemory                       | 0
              | usedMemory                       | 0
              | objects                          | 0
              | fragmentation                    | 0
              | compactionTime                   | 0

(5) Executing - show metrics --region=/test

Cluster-wide Region Metrics

Category  |            Metric            | Value
--------- | ---------------------------- | -----
cluster   | member count                 | 1
          | region entry count           | 44
region    | lastModifiedTime             | -1
          | lastAccessedTime             | -1
          | missCount                    | -1
          | hitCount                     | -1
          | hitRatio                     | -1
          | getsRate                     | 0
          | putsRate                     | 0
          | createsRate                  | 0
          | destroyRate                  | 0
          | putAllRate                   | 0
partition | putLocalRate                 | 0
          | putRemoteRate                | 0
          | putRemoteLatency             | 0
          | putRemoteAvgLatency          | 0
          | bucketCount                  | 1
          | primaryBucketCount           | 1
          | numBucketsWithoutRedundancy  | 0
          | totalBucketSize              | 44
          | averageBucketSize            | 44
diskstore | totalEntriesOnlyOnDisk       | 0
          | diskReadsRate                | 0
          | diskWritesRate               | 0
          | totalDiskWriteInProgress     | 0
          | diskTaskWaiting              | -1
callback  | cacheWriterCallsAvgLatency   | 0
          | cacheListenerCallsAvgLatency | 0
eviction  | lruEvictionRate              | 0
          | lruDestroyRate               | 0

Install a Multi-Node Ampool Cluster Using MASH

Installing a multi-node Ampool cluster is not much different from the single-node install.

  • Install the Ampool package on all the cluster nodes (preferably at the same directory path).
  • One of the cluster nodes should start the Locator and a Server, while each of the other nodes should start one Server instance.
  • Make sure each server name is unique within the cluster.
  • Start the Locator before the Server members, and provide the "locator-host[port]" address while starting the servers.
  • If remotely starting multiple servers via a shell script over SSH, make sure each ssh command runs in the background so that all servers start in parallel, rather than waiting for one server to come up before starting the next (see the sketch after this list).
  • Use any of the cluster nodes (or a separate node with the Ampool package installed) as a client to connect to the cluster locator and use Ampool.
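
A minimal launcher sketch under these assumptions (the hostnames, directory paths, and locator address are all illustrative):

#!/usr/bin/env bash
# Start one Ampool server per node, in parallel, over SSH.
LOCATOR="node1[10334]"
AMPOOL_HOME=/usr/local/share/ampool
for HOST in node2 node3 node4; do
  # Background each ssh so all servers start simultaneously.
  ssh "$HOST" "$AMPOOL_HOME/bin/mash -e 'start server --name=Server-$HOST --locators=$LOCATOR --dir=/var/ampool/S1 --properties-file=$AMPOOL_HOME/config/ampool_server.properties'" &
done
wait  # return only after every server start has completed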

Set Up a Secured Multi-Node Ampool Cluster

Refer to the instructions at Setting up secured cluster for setting up a secured Ampool cluster with Kerberos/LDAP authentication.