Pivotal Knowledge Base

PivotalVRP User Manual

Version 6.0 pdf link:

http://pivotalvrp.docs.pivotal.io/doc/6000/pdf/PivotalVRP-6.0-User-Guide.pdf

Table Of Contents

  • Overview
  • System Architecture
    • Opening the Application
  • Managing System Settings
    • User Management
    • General Settings
    • Repository Settings
    • My User Settings
  • Viewing Data from the Database
    • Understanding the Dashboard
    • Using Right-click Functions for Managing Resources
      • Managing SQL Resources
      • Creating Rules from Data
      • Rule Action
      • Creating Rules from SQL Text
      • Copying SQL Text
      • Creating a tuning task
      • Tagging SQL Text
      • Display session info
      • Display explain plan
      • Kill Session
      • Changing User
  • Administering Rules
    • Managing Modules
      • Adding a New Module
      • Removing a Module
      • Editing a Module
    • Managing Rules
      • Adding a New Rule
      • Modifying Rule Activation Overrides
      • Modifying a Rule
      • Removing a Rule
      • Rules best practices
  • Graphs
    • The PivotalVRP Graph Window
      • Top Queries
      • Load vs. Runtime
      • Statistics
      • Query Distribution
      • Query Analysis
      • Runtime statistics
      • Right click and double click functions
  • Add-Ons
    • Add-Ons button bar
      • Tuning Tasks
      • Chargeback
      • Rules Wizard
      • Variance
      • Inquire
      • Playback
      • Performance BI
  • Important files within PivotalVRP

 

Chapter 1 – Overview

PivotalVRP (Virtual Resource Partitioning) enables your organization to execute Virtual Resource Partitioning of hardware resources (CPU, I/O) of your IT transactions based on business priorities and needs in real-time. By building the VRP, your organization is able to manage the load of each transaction individually, providing a better QoS and response time for all your transactions.

PivotalVRP software boosts performance and availability of database-dependent applications in real-time. PivotalVRP is applicable to a wide spectrum of DB-based enterprise environments, in particular database infrastructure supporting ERP, CRM, DW, in-house applications, and other mission-critical applications.

PivotalVRP communicates with the database and the operating system and then displays the results on a dashboard. Armed with this information, you are then able to define specific rules to reduce the resources allocated to exceptionally heavy transactions that could cripple your system while leaving resources available for other transactions.

PivotalVRP collects various statistics, correlates them and then logs them into an internal repository. You can then monitor the system-defined rules and correct them as necessary. When a rule is defined, the virtual partitioning of the server resources is carried out immediately. It is completely transparent to the database and therefore there are no special configuration modifications required at the database level.

PivotalVRP provides the following functionality:

  • Immediate real-time improvement within your IT systems.
  • Real-time capability to reallocate resources in order to reduce unexpected heavy loads in systems in production.
  • Real-time and historical monitoring of your databases.
  • Control of resources and management of rollback that avoids both additional unwanted loads and any need to kill transactions.
  • Improved performance, availability, stability and QoS throughout the day.
  • Ability to specify computing resources (CPU, I/O) for specific transactions.
  • Ability to run BI, batches, backups and other heavy transactions together in the middle of the day by simply limiting their resources instead of delaying them.
  • Ability to manage existing resources more efficiently.
  • Rules and modules for ERP, CRM, DBA tools, BI and Data Warehouse that can be defined and fine-tuned to specific needs and added to when required.
  • Total transparency to users and applications.
  • Definition of custom rules quickly and easily.
  • Reduction in use of power and datacenter capacity.
  • Allocation of resources so that the most important transactions have the lowest response times and the most resources.
  • Receipt of alerts anywhere.
  • Acceleration of transactions for high-priority modules or users that you want to run faster in accordance with your business needs.

Chapter 2 – System Architecture

PivotalVRP is composed of the following two main elements:

  • PivotalVRP Console

PivotalVRP Console is a Windows-based web GUI console that can be installed on any PC running a clean 64-bit Windows installation. The console allows you to monitor your various databases, investigate performance issues, and apply manual or automatic rules as needed.

  • PivotalVRP Agent

PivotalVRP Agent is installed on the database server (on each segment server) itself. The agent is controlled by the console and performs the actual resource allocation according to the rules.

PivotalVRP Agent's footprint in an average database is about 0.5% of a single CPU; on a machine with four CPUs, for example, PivotalVRP consumes roughly 0.12% of the machine's total CPU capacity. The system's console, installed on a dedicated Windows PC/server on the network, collects various statistics, correlates them, and then logs them into an internal repository.

The system can be easily monitored and controlled by pre-defined rules, and corrected as required. When a rule is defined, the virtual partitioning of the server's resources is performed immediately by the PivotalVRP Agent. Fully transparent to the database, it does not require special configuration at the DB level. Data is clearly displayed on the system dashboard on the PivotalVRP Console. The dashboard allows users (DBAs and managers) to identify problems in the SLA, and pinpoint any heavy transaction that may be the cause of the problem. Users can create rules and prioritize the resource usage directly from the dashboard.

PivotalVRP supports Greenplum version 4.x and higher (both CE and SNE databases) on the following operating systems: Red Hat Linux, SUSE Linux, and Oracle Enterprise Linux.

 

2.1 Opening the Application

PivotalVRP is a web-based application. This gives you the flexibility to use PivotalVRP from any internet browser.

To open the PivotalVRP application:

  1. Launch a web browser and type the PivotalVRP server IP address and port number as follows:
    https://<ServerIPAddress>:<Port>/<MoreVRP HTML page> in the address field.

    For example: https://XXX.YYY.ZZZ.XYY:9191/MoreVRP.html.


  2. Enter the company name (registered at the time of installation), log in as administrator with the default password, and click Login. The company-name field enables you to keep track of which PivotalVRP domain you are logged into in environments with multiple PivotalVRP installations.

    Note: The default password for the Administrator is "Admin2010". It is highly recommended to change all default passwords during the first activation of PivotalVRP.

  3. After logging in to PivotalVRP you need to connect to the Greenplum (GP) database, so you must configure the connection to it:

    Select the Settings TAB from the Main Menu.



  4. The General Settings screen opens.

  5. Click the Add button; the Server Information window opens. Enter the following information:

    Connection Name: A name for the master view (whatever you wish to call it).
    Host Name / IP: The IP of the master node (it can be the public IP); when adding single segments, the IP of the segment host.
    Port: When using auto connect, the master port (default is 5432); when adding single segments, the port of the segment.
    Username/Password: As defined (for the PivotalVRP connection to the DB).
    Connection Type: Select Automatic enumeration to connect to the full environment, or Single Segment to connect to a single segment.
    Test Connection: Verifies that the settings are correct and that PivotalVRP can communicate with the segments.

    Press OK.
  6. Provided PivotalVRP was installed correctly, you will see the segments populating the Known Server list.
  7. Once you are connected, select Dashboard from the Main Menu in the left pane.

You can now start working with PivotalVRP.

If you encounter any problems during the above procedure, contact PivotalVRP support at support.pivotal.io.

The General Settings screen



The General Settings table below details the login information and system settings displayed in this screen:

Connection Name: Name of the view.

Host Name / IP: The IP of the Master node or segment server, depending on the connection type.

Port: The default for the master (for auto connect) is 5432; for a single-segment connection, use the segment's port.

User Name: As defined according to the installation manual.

Password: As defined according to the installation manual.

Connection Type: Selector for full-environment connection vs. single segment.

Test Connection: Verifies the connection settings and communication with the segment(s).

Automatically connect upon disconnection: Checked by default; attempts re-connection to segments if the connection is lost.

Limit number of reconnect attempts: Unchecked by default; gives you manual control of re-connection attempts.

Data and statistics polling settings:

Data polling interval: Data polling interval from the database memory.

Statistics polling interval: Statistics polling interval from the OS.

Connection timeout: Timeout setting in seconds.

Maximum IO: IO-per-second threshold setting.

Repository setting: User definable from 2 weeks to 12 months (going up to 6 months is recommended).

Dashboard - Administrator Mode

 

  1. Return to the Privileges screen and click Logout, as defined in the User Privileges settings (see Managing User Privileges).
  2. Log in to an account with viewer privileges.

In Viewer Mode the Rules are not available; in Settings, only My User Settings is accessible.

Dashboard - Viewer Mode

 

 

Chapter 3 – Managing System Settings

Administrators are responsible for managing the system settings, including:

  • User Management
  • General Settings
  • Repository Settings
  • My User Settings

 

3.1           User Management

 

Administrators control users' access to the application.

To Manage User Privileges:

  1. As an Administrator, from the Settings menu option, select User Management.

    User Management Screen


  2. User privileges per user type:

Administrator:
  Opening and closing the application
  Connecting to the database
  Settings definition:
    • General System Settings and login information
    • Repository Settings
    • User Access Privileges
  Creating and activating rules
  Creating and activating SQL enhancers
  Creating reports and graphs
  All right-click functions

Operator:
  Opening and closing the application
  Connecting to the database
  Access to all screens
  Limited right-click functions
  Activating pre-defined rules

Viewer:
  View-only access within the main application:
    • Dashboard
    • Graphs & Modules, except for the Locks view
    • Settings (only My User Settings & My EmailGraphs Settings are accessible)

 

  3. Define the required user privileges as described below:

User Name/Password: Login name and password of the user.

Servers: Allows you to choose which access level (Administrator/Operator/Viewer) is enabled on which database.

Minimum manual control: Defines, for each user, the minimum level of Operator control allowed for manually managing resources.

  4. Click Save Privileges to save the changes made to the user access control settings.

Note: If you forget your Administrator password, you may contact the PivotalVRP support center through Zendesk support tickets.  Following a short verification procedure, you will receive a password reset file for the Administrator user.

3.2           General Settings



To manage General System settings:

  1. From the Settings menu option, select General.


    The General Settings screen is displayed

  2. Enter the required settings (see General Settings Table).
  3. Once the environment has been defined, the known servers list will display the current connections and their status (connected and OK).

3.3           Repository Settings

To manage the Repository settings:

  1. From the Settings menu option, select Repository Settings.
  2. Type in the Keep Data value (default: 3 months).
  3. Click Save Settings.

The Repository Settings Screen

3.4           My User Settings

Once the administrator has defined users via the Settings tab, administrators, operators and viewers all have access to the My User Settings area. This area provides access to personalized settings within PivotalVRP.

The first tab, My Operational Settings, contains general information such as email settings and logout timing. Make sure that for User Name you enter a complete email address, for example: johndoe@email.com.

Once you have entered your custom settings, click Save Settings.

My User Settings Screen

 

The second tab, My Graph Settings, allows each user to save his/her own preferred time periods for all graphs and add-ons. This allows you to complete long drill-downs or investigations over a period of time without the need to reset the dates after each logout. In this area you can also set the default SQL sensitivity.

Once you have entered your custom settings, click Save Settings.

 

Chapter 4 – Viewing Data from the Database

PivotalVRP collects all the information about transactions submitted to and from the database, which typically represent communications between user applications and the database, database functions, batch files, etc. The collection of the information about the transactions, which is executed at the operating system level, is a non-invasive process which does not affect the database in any way. The information that is collected is displayed on the PivotalVRP Dashboard. Via the dashboard you can see any resource problems within your applications and identify any heavy transactions that may be affecting its efficiency. In addition, you can create rules and prioritize the resource usage directly from the dashboard.

 

4.1           Understanding the Dashboard

 

The table below describes each element displayed in the Dashboard:

Acceleration Chart: Displays the relationship between transactions that are running and those that are managed.

Health (QoS) Indicator: Calculates the Quality of Service provided by the database server to reflect the user experience in terms of transaction response time.

Sessions Log (Database Data): Displays a list of transactions currently running on the database server with live information. In the high-level view, clicking the small triangle to the left of a session opens the session tree with information on all its slices. Double-clicking a session drills down to show more information about that session; right-clicking displays additional options.

CPU Utilization Chart: Graphically displays the current CPU percentage of the currently selected server.

I/O Operations Chart: Graphically displays the current usage of read/write I/O operations of the currently selected server.

Memory: Graphically displays the memory usage of the currently selected server in a pie graph (free vs. used).

Networking: Graphically displays the amount of network usage, both received and sent, of the currently selected server.

Server Selection: Enables selection of the managed server for dashboard viewing from a pull-down list of connected servers.

 

4.2           Using Right-click Functions for Managing Resources

You can manage resources, create rules and prioritize the resource usage directly from the dashboard by using the right-click functions that become available when selecting a transaction in the Database Data area.

The table below lists the right-click functions that are available to each user type:

Manage SQL Resources: Administrator and Operator

Create Rules from Data: Administrator only

Create Rules from SQL Text: Administrator only

Copy SQL Text: Everyone

Create tuning task: Everyone

Create/Edit SQL tag: Administrator and Operator

Remove SQL tag: Administrator and Operator

Display session info: Everyone

Display explain plan: Everyone

Show estimated runtime: Everyone

Playback the last minutes: Everyone

Analyze query: Everyone

Show query segmentation: Everyone

Kill Session: Administrator and Operator

 

4.2.1        Managing SQL Resources

To Manage SQL Resources:

  1. Right-click on the relevant transaction (either the entire session or individual slices), then select Start Managing Resources.

    The Manual Control dialog box is displayed.


  2. Using the slider, set the throttle at the required setting, then click OK. The manual control range can be defined differently for each user by the administrator.
  3. The control affects both CPU and IO at the same level, and relates to the maximum potential resource usage of the session.
The Acceleration Chart will change to display the new level of resource management.
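As a rough model of the behaviour described above (purely illustrative; the function and numbers are not from the product), a single throttle level caps both CPU and I/O at the same fraction of the session's maximum potential usage:

```python
def apply_throttle(max_cpu_pct: float, max_io_ops: float, throttle_pct: float):
    """Illustrative sketch: one throttle level caps CPU and I/O together."""
    factor = throttle_pct / 100.0
    return max_cpu_pct * factor, max_io_ops * factor

# A 40% throttle on a session that could use 100% of a CPU and 5000 IO ops/s
cpu_cap, io_cap = apply_throttle(max_cpu_pct=100.0, max_io_ops=5000.0, throttle_pct=40.0)
print(cpu_cap, io_cap)  # 40.0 2000.0
```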

4.2.2        Creating Rules from Data

Using the information displayed in the dashboard to determine the level of resource management required, you can easily create a rule that has an immediate effect on the managed resources. This option automatically locates within the system most of the relevant data required to create a rule, so you only need to define a minimum number of parameters.

For further information about creating rules, see Managing Rules.

To create a rule from data:

  1. Right-click on the required transaction, then select Create Rules from Data.

    The Define Rule dialog appears. This dialog contains 3 sections:

    Rule Information
    Rule Thresholds – which contains four tabs: Basic, General, Advanced, Rule Activation Overrides
    Rule Action


    The Define Rule dialog


  2. In the Rule information section, enter a new name for the rule, and rule remark if desired, in the relevant fields.
  3. The Rule Thresholds section consists of four tabs:
    1. Basic Tab
      Use the Basic Tab to define:


      DB name (right-clicking enables select all servers function).
      Physical Reads / Write IO.
      CPU usage.
      SQL text.
      Define the run time required, and the time period required for running the rule.

    2. General Tab
            
      Use the General Tab to define:


      Machine name (which runs the SQL).
      The user which runs the SQL.
      The Program that runs.
      All fields can be matched literally (normal) or by regex.

    3. Advanced Tab

      Use the Advanced Tab to define:


      Module
      Action
      Client Info
      Hash Value
      SQL ID
      All fields can be matched literally (normal) or by regex.

    4. Rule Activation Overrides

      Use the Rule Activation Tab to define:


      Rule activation threshold defined by CPU load level in %
      Rule activation threshold defined by Physical IO load level
      Rule activation threshold defined by both CPU and Physical IO loads
      Allow activation by other rules
      Rule Priority
      The Rule activation overrides are triggered by resource usage at the server level.

 

4.2.3        Rule Action

In Rule Action you can choose what action a specific rule triggers. A rule can trigger a certain level of control of the available resources, the execution of scripts, a notification, or the activation of another rule. You can define the same situation and have multiple actions happen in parallel. For instance, if a particular user runs a certain transaction and the load on your system is beyond a pre-defined threshold, you can set PivotalVRP to notify you via e-mail and reduce the available resources by a certain percentage.

Rule Action Dropdown Menu

 

There are four types of Rule Action:

  • Control
  • Execute Script
  • Send Email
  • Activate Rule

 

 

  • Control


 

    • With Control you can use the slider to set the throttle level for the rule.
    • To distribute resources equally among all controlled jobs, select the check box and click OK. This distributes resources equally to similar processes and can be used to keep parallel instances of the same job in check.

 

 

 

  • Execute Script


 

    • When this option is selected, the rule can execute scripts.
    • For additional information, a white paper covering scripts, including samples, is available.

 

 

  • Send Email


    • When selected, the rule sends notifications to the recipients.
    • The mail settings must be defined before use.
    • The Test Settings button checks your connection.
    • The Test Recipient button sends a test e-mail to the recipient.

 

 

  • Activate Rule


    • The rule activates a different rule.
    • Activate rule allows users to create "fast lanes" for priority tenants or processes.
    • For rules to be available for activation, the "Allow activation by other rules" option on those rules must be enabled.



The rule is automatically added to the list of rules in the application (see Administering Rules).

4.2.4        Creating Rules from SQL Text

This option for creating a rule, after reviewing the information displayed in the Dashboard, automatically fills in the SQL Text contained in the transaction, requiring you to define only the remaining parameters.
For further information about creating rules, see Managing Rules

To create a rule from SQL Text

  1. Right-click on the required transaction, then select Create Rules from SQL Text.
    The Define Rule dialog box is displayed with Rule Limits parameters automatically defined.


  2. Enter a new name for the rule, and rule remark, if required, in the relevant fields.
  3. Define the required parameters for the Rule Limits. Use either literal or regular expressions to match your criteria.
    For example, to cover all schemas (usernames) except those containing XXX in their name, enter ^((?!XXX).)*$ in the Username field and then select the regex button.
    Another example: to capture everything except a specific schema name, ABC, enter
    \b(?!(?:[Aa][Bb][Cc])\b)[\w']+\b in the Username field and then select the regex button.
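Both patterns above can be checked in any regex tester before pasting them into the rule; a quick sketch in Python (the schema names here are made up for illustration):

```python
import re

# Pattern 1: match any schema name EXCEPT those containing "XXX".
not_contains_xxx = re.compile(r"^((?!XXX).)*$")

# Pattern 2: match any word EXCEPT the literal name "ABC" (any letter case).
not_abc = re.compile(r"\b(?!(?:[Aa][Bb][Cc])\b)[\w']+\b")

print(bool(not_contains_xxx.match("SALES_SCHEMA")))  # True  - no "XXX" anywhere
print(bool(not_contains_xxx.match("XXX_ARCHIVE")))   # False - contains "XXX"
print(bool(not_abc.search("abc")))                   # False - exactly the word "abc"
print(bool(not_abc.search("abcd")))                  # True  - not the bare word "abc"
```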
  4. Define the required run time, and the required time period for running the rule.
  5. Choose the action you would like to use after the rule is applied. See Rule Action for details.
  6. If required, select the Distribute resources equally among all controlled jobs check box, then click OK.

The rule is automatically added to the list of rules in the application (see Administering Rules).

4.2.5        Copying SQL Text

To copy SQL Text:

  1. Right-click on the required transaction, and select Copy SQL Text.

The SQL Text is copied to the clipboard and you may paste it to any other SQL editor.

4.2.6        Creating a tuning task

To create a tuning task:

  1. Right-click on the required transaction, and select Create tuning task.

    The Create tuning task dialog box is displayed with opening date automatically defined.

  2. Fill in the Task Creator, Developer Name, Developer Email and Remarks fields.
  3. Click the Task Query tab to display the information that is being sent to the developer.
  4. Click OK; the tuning task will then be available for follow-up within the Tuning Task add-on.


 

4.2.7        Tagging SQL Text

To tag SQL Text

  1. Right-click on the required transaction, and select Edit SQL Tag.
    The Set a Tag dialog box is displayed.



  2. Enter a tag for the selected SQL Text, and click OK.

The tag can be any free text, SQL text, or DB table name.

4.2.8        Display session info

To display session info:

  1. Right-click on the required transaction, and select Display session info
    A new window will open with the session info.
  2. All the important information of the transaction is displayed within the window.

 

4.2.9        Display explain plan

To display the explain plan:

  1. Right-click on the required transaction, and select Display explain plan

A new window will open with the execution plan for the selected transaction.

 

4.2.10    Kill Session

To kill a session:

  1. Right-click on the required transaction, and select Kill Session.
    A confirmation message is displayed.


  2. Click Yes to continue.

The selected transaction is stopped.

4.2.11    Changing User

You can change the user type currently defined in the system from the Dashboard.

To change user:

  1. In the Dashboard, (Main GUI) click logout.



    The Privileges Login dialog box is displayed.

     

  2. Select the required user, enter the relevant password, and click Login.

The application functionality changes according to the user type now logged in.

Chapter 5 – Administering Rules

Rules are user-configured sets of thresholds and triggers that direct the application on what action to take (control resources, send an email alert, execute a script, or activate another rule) based on transaction behaviour.

Rules can be defined according to various factors such as:

  • A specific SQL or part of it
  • The Database username which executed the transaction
  • The Machine sending the actual transaction (in the connected host)
  • The Hostname that initiated the transaction
  • The program that launched the transaction
  • The Module/Action of the transaction
  • The duration of the transaction
  • The amount of CPU or I/O the transaction consumes
  • The level of utilization of the database server

 

5.1           Managing Modules

Modules are a set of rules that are grouped together in a logical folder which facilitates the activation of multiple rules configured within the application.

5.1.1        Adding a New Module

To add a new module:

  1. Select Rules TAB from the option in the main menu.
     
     
     
    The Rules screen is displayed.
     
     
     
  2. In the Modules pane, click Add Module or right-click in the Module pane and select Add Module.
    A new branch is added to the Module tree.
     
     
     
  3. Enter the required name for the new module.

5.1.2        Removing a Module

In the Modules pane, select the required module, and click Remove Module, or right-click in the Module pane and select Remove Module.

  1. A new window pops up requesting your action.
  2. Select Move rules to a different module, or click the Remove button.
  3. Once the rules have been moved to a different module or deleted, the module disappears from the Module tree.
  4. When you remove a module, you will only be able to remove rules that you are allowed to administer.



The module is removed from the Module tree.

5.1.3        Editing a Module

  1. In the Modules pane, select the required module, and click Edit Module or right-click in the Module pane and select Edit Module.
  2. Enter a new name for the module.

5.2           Managing Rules

Rules can be easily built and customized to facilitate efficient management of resources.

5.2.1        Adding a New Rule

To add a new rule:

In the Rules screen, click Add Rule.

The Define Rule dialog box is displayed.



The table below describes the elements displayed in the Define Rule dialog box. Use regular expression matching if necessary, or literal matching (the default).

Rule Name: Enter a name for the new rule.

Rule Creator: Enter the name of the creator.

Last Update: Time stamp of rule creation/update.

Rule Remark: Enter any relevant remarks for the new rule.

DB Name (Basic tab): Right-clicking allows you to select all servers or clear all selections.

Physical Reads/Writes (Basic tab): Enter the number of physical reads/writes of a transaction that triggers the rule.

CPU Usage (Basic tab): Enter the level of CPU usage for the new rule.

Buffer Hit Ratio (Basic tab): Enter the hit ratio.

SQL Text (Basic tab): Enter the SQL text for the new rule (or part of it).

Run Time, From/To Time (Basic tab): Set the run time and the time period for the new rule.

Machine (General tab): Enter the machine that is running the application.

User Name (General tab): Enter the user name.

Program (General tab): Enter the program running the application.

Module (Advanced tab): The module name (if the application tags its sessions).

Action (Advanced tab): The action name (if the application tags its sessions).

Client Info (Advanced tab): -

Hash Value (Advanced tab): -

SQL ID (Advanced tab): -

Rule Actions: Defines the action created by the rule:

  • Control – manages resources. Select the check box to distribute resources equally among jobs controlled by the same rule.
  • Execute script – allows the execution of scripts.
  • Send Email – sends an e-mail.
  • Activate rule – activates a pre-selected rule.

 

 

 

Enter all the required information and click OK.

The rule is added to the list of rules displayed in the Rules screen.

Notes:
All the textual fields are treated with a "Contains" filter. This means that if you type in a partial string only, all the strings that contain the inserted string will trigger the rule. For example, if you enter "count *" in the SQL Text field, all queries that contain this text will be managed.
If some of the fields remain empty, PivotalVRP ignores the filter. For example, if you leave the UserName field empty, the rule will apply to every user in the database.
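The matching semantics described in these notes can be sketched roughly as follows (a simplified illustration, not PivotalVRP's actual code; the function name and `use_regex` flag are invented to mirror the literal/regex toggle in the dialog):

```python
import re

def field_matches(rule_value: str, session_value: str, use_regex: bool = False) -> bool:
    """Illustrative sketch of rule field matching.

    An empty rule field is ignored (matches everything); otherwise a
    literal field acts as a "contains" filter, and a regex field is searched.
    """
    if not rule_value:                      # empty field: filter is ignored
        return True
    if use_regex:
        return re.search(rule_value, session_value) is not None
    return rule_value in session_value      # literal: "contains" semantics

print(field_matches("count *", "select count * from t"))     # True: query contains the text
print(field_matches("", "any_user"))                         # True: empty field matches all
print(field_matches("^etl_", "etl_loader", use_regex=True))  # True: regex match
```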

5.2.2        Modifying Rule Activation Overrides

If you want rules to be automatically activated only when the entire database server is under load, you can use the Rule Activation Overrides. These settings let you configure the level of resource usage the server must reach before the rule is automatically activated.

To modify the global thresholds of a rule:

  1. In the Rules screen, double-click the required rule (or do this while creating a rule).
    When the Rule page appears, select the Rule Activation Overrides tab.

     

  2. Define the thresholds as described in the table below:

Only activated when total CPU load is above: Select the check box and enter the CPU load percentage; the rule activates when the CPU load reaches the defined percentage.

Only activated when total Physical IO is above: Select the check box and enter the total I/O percentage; the rule activates when the total I/O reaches the defined percentage.

Use both CPU and IO thresholds: Select the check box to use both the CPU and I/O thresholds for this rule.

Allow activation by other rules: Select the check box to allow other rules to activate this rule.

Rule Priority: If a certain event is identified by two rules, the rule with the highest priority takes precedence.

5.2.3        Modifying a Rule

 

To modify a rule:

  1. In the Rules screen, select the required rule, and click Edit Rule .

    The Edit Rules dialog box is displayed.
  2. Make the necessary modifications to the rule, and click OK.

5.2.4        Removing a Rule

To remove a rule:

  1. In the Rules screen, select the required rule, and click Remove Rule.

NOTE: there is no Remove Rule dialog box to display.



5.2.5        Rules best practices

There are 3 types of rules:

  • Rules of stability
    • These are, in effect, rules created to keep the DB environment at optimal operation levels. They are defined and triggered by thresholds at the server level: whenever any single server reaches a particular load level, the heaviest resource consumers are restricted to levels according to their loads. They are grouped, and can be defined by CPU usage, IO usage, and runtime while using resources. Usually these will be defined in incremental steps.
  • Targeted rules
    • These can be triggered by a single parameter or a series of them. They are usually used when you are targeting a specific user, process, application, etc.
  • General rules
    • These are created for certain situations but are applied to less specific processes. For example, when you have multiple background processes running and want additional resource bandwidth available, you create these rules with thresholds lower than the rules of stability. They are usually used during certain periods and activated on processes that run beyond a particular set time.

Important! The PivotalVRP resource management and rule engine operate in real time; once the triggers are no longer valid, the VRP ceases.

Please note: when using rules to trigger alerts, keep in mind that every single occurrence sends an email, so for runtime-based alerts select only the master as the DB for the trigger.

Chapter 6 – Graphs

To access the graphs:

  1. From the Main Menu, select Graphs.

 

Note: The data that is presented in each of the graphs is data that is held in the Repository. This data is updated based on the frequency defined in the Data cycling parameter in the Repository Settings screen (see Repository Settings).

6.1           The PivotalVRP Graph Window

The graph section is divided into several tabs. Each tab gives you access to a different graph. The purpose of these graphs is to visualize your database performance from different aspects. They allow you to track your performance (based on your repository) and see how PivotalVRP is boosting your ROI, improving your SLA, and making better use of your resources.

 

6.1.1        Top Queries

This graph uses a filtering mechanism based either on Runtime, to display the queries that take the longest to run in the system, or on I/O, to display the queries that consume the largest amount of resources.

To create the Top Queries graph:

  1. In the Choose Filters section, select the filter type from the Filter based on drop-down list:
    • Runtime, CPU Per Sec., Reads Per Sec., Writes Per Sec., Hitratio, and Optimizercost.
  2. In the Number of queries to filter field, enter the required number of queries.
  3. In the from and to fields, select the date period that you want to view.
  4. In the DB Name field, select the DB that is being controlled. (Right-click enables select all servers/clear options.)
  5. Set the SQL Sensitivity as follows:
    • Lowest: presents high-level information about the queries so that similar SQL statements are treated as one query (40 characters)
    • Highest: drills down into the queries to present more detailed information so that similar SQL statements are presented individually (1000 characters)
    • Hashvalue: presents the top queries according to their actual hash value.
  6. Click Calculate.
    The information is displayed.
  7. Right-clicking a session opens a drop-down menu that allows the user to copy the SQL text, create a tuning task, create a rule, or export the information to CSV.
  8. Double-clicking the SQL Text shows more information about the specific SQL by way of the Query Segmentation graph.
     

 

Some of the options include copying the SQL text, playing back events (analysis), creating tuning tasks or rules, and exporting to CSV.
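The SQL Sensitivity options above decide how aggressively "similar" SQL statements are grouped. The following sketch illustrates the idea using the character counts the manual gives; it is an assumption about the grouping mechanism, not PivotalVRP's actual code, and `grouping_key` is a hypothetical helper.

```python
# Illustrative sketch of how the SQL Sensitivity options could group
# "similar" SQL statements. Not PivotalVRP's implementation.
import hashlib

def grouping_key(sql_text, sensitivity):
    """Return the key under which queries would be aggregated."""
    if sensitivity == "lowest":
        # similar statements collapse into one query (first 40 characters)
        return sql_text[:40]
    if sensitivity == "highest":
        # more detail: statements collapse only if identical up to 1000 chars
        return sql_text[:1000]
    if sensitivity == "hashvalue":
        # each distinct statement is tracked by its own hash value
        return hashlib.md5(sql_text.encode()).hexdigest()
    raise ValueError(sensitivity)
```

At the lowest sensitivity, two statements that differ only in a literal near the end share a key and are counted as one query; at higher sensitivities they are reported separately.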

 

6.1.2        Load vs. Runtime

This graph shows how the load on the system affects user experience when running a query. It compares the load on the system to the runtime of the query, enabling you to see the correlation between CPU or IO load and the execution time of queries. This enables you to build Rules accordingly, paying particular attention to the rule's global thresholds (see Modifying Rule Activation Overrides).

** The Load vs. Runtime graph is more relevant to legacy (non-MPP) databases. In MPP environments, because multiple servers make up the environment, the IO load tab calculations are less relevant.

To view the Load vs Runtime graph:

  1. In the Choose Filters section, select the filter type from the Filter based on drop-down list. (CPU Per Sec., Reads Per Sec., Writes Per Sec.)
  2. In the Host Name field, select the server that is being controlled.
  3. Set the Start Date and Start Time, and End Date and End Time.
  4. In the Number of Queries field, select the number of queries to be compared.
  5. Click Calculate.

The information is displayed.
 

 

The graph displays the top queries as defined in the Number of Queries field. The tab of each query shows the query's name and the number of occurrences during the defined time period.
The chart presents the breaking point at which user experience slows down and performance deteriorates.

6.1.3        Statistics

The Statistics graph presents the level of IO and CPU resource consumption. When you mouse over any point on the graph, you get the numerical values of the selected point. To get a better picture of any given point, you can right-click and zoom in on a shorter period of time, which provides a higher level of resource-usage detail. Please note that since you are reviewing an MPP environment, the top-level view doesn't always provide a full picture; you can also get this information at the segment level by selecting an individual segment in the DB name pull-down menu.

To create the Statistics graph:

  1. Select the Environment or segment in the DB name pull down menu.
  2. In the Choose Filters section, set the "from" Date and "to" Date period, and the Start and End Time.
  3. In the Host Name field, select the server that is being controlled.
  4. Click Calculate.

The information is displayed.
 

 

In the graph GUI there are 2 sections:

  • CPU Load
  • IO/s (read & write)

6.1.4        Query Distribution

The Query Distribution graph shows how the load is distributed over several entities.

To create the Query Distribution graph:

  1. In the from and to fields, select the required time period.
  2. In the Host Name field, select the server that is being controlled.
  3. From the Calculate According To drop-down list, select the required option as follows:
    • Total Reads Percent
    • Read Per Sec. Percent
    • Total Writes Percent
    • Writes Per Sec. Percent
    • Runtime Percent
    • CPU Per Sec. Percent
    • SQL Text
    • User Name
    • OS User
    • Application
    • Module
    • Action
    • Optimizer Cost
    • Hit Ratio
    • Client Info
    • Command Type


Note: The selections are based on the Rule Creation dialog box.

  4. Select the number of slices to be presented in the pie chart.
  5. Set the SQL Sensitivity as follows:
    • Low: presents high-level information about the queries so that similar SQL statements are treated as one query
    • High: drills down into the queries to present more detailed information so that similar SQL statements are presented individually
    • Hashvalue: selects the queries according to their actual hash value.
  6. Select the required Sort By option.
  7. Click Calculate.

The information is displayed.
 



SQL Text, User Name, OS User, Application, Module, and Action can be further filtered by:

  • Execution
  • I/O's
  • Hit Ratio
  • Optimizer Cost
  • CPU
  • Runtime

Double-clicking any part of the pie opens a new window representing the Top Queries (up to 1000).
 
 

6.1.5        Query Analysis

This graph allows you to create a custom-made graph based on any of the filters provided. The graph is divided into 3 tabs, which allow a very specific drill-down through specific filtering parameters.

The Query Analysis graph queries raw data and is therefore limited to 12-hour data segments at a time. In busy environments it is recommended to explore shorter periods of time in order to receive faster results.

To create the Query Analysis graph:

Set any of the provided filters, and click Calculate.

The information is displayed.



 
 
 


Double-clicking any of the queries opens the following graph, which shows the runtime segmentation of the particular query.

 

 

6.1.6        Runtime statistics

The Runtime Statistics window provides graphical resource consumption and performance information at the touch of a button. You can define your query runtime thresholds and filter the specific information you seek. The Transactions per second (TPS) vs. health graph shows the overall load effects on your system's health. Correlating the information in the 3 graphs can quickly point out how resource usage and TPS affect performance. Hovering over any point of the graphs provides the numerical values for the selected point.
 
 
 

6.1.7        Right click and double click functions

Within the graphs section, PivotalVRP v6 has a variety of additional functions available. As previously shown in the live session dashboard section, setting the cursor at different locations opens various options. To facilitate drill-downs outside of PivotalVRP, you can export results in various formats.

 

As can be seen above, if you right-click any graph within PivotalVRP, a window opens with the option to export the graph to an image. Once selected, a new window opens with a link to the image (see below). If you click the link, a new web page opens with the copied image. Should you want to retrieve it at a later time, all exported material from PivotalVRP is saved in a folder at the following path: "C:\Program Files\MoreDB\GlassFish\glassfish\domains\MoreVRP\docroot\Exports".

 



As can be seen above, if you right-click any table within PivotalVRP, a window opens with the option to export to CSV. Once selected, a new window opens with a link to the CSV. If you click the link, a new web page opens with the data. The file is available in the same Glassfish folder as the images. It can easily be opened in Excel or any other program that imports CSV. The data within the file is delimited by semicolons (";").
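Because the exported files are semicolon-delimited rather than comma-delimited, a default CSV reader would load each row as a single field. A minimal sketch of reading such an export with Python's csv module (the sample contents and column names here are made up for illustration):

```python
# PivotalVRP CSV exports use ";" as the delimiter, so pass delimiter=";"
# explicitly. The sample data below is invented for illustration.
import csv
import io

sample = "SQL Text;Runtime;Reads Per Sec.\nSELECT 1;0.42;120\n"

rows = list(csv.reader(io.StringIO(sample), delimiter=";"))
header, data = rows[0], rows[1:]
```

In practice you would replace the `io.StringIO` sample with `open(...)` on a file from the Exports folder.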

 


 

Chapter 7 – Add-Ons

7.1           Add-Ons button bar

The add-ons are additional modules that add features to the PivotalVRP system; they are developed by the PivotalVRP development and engineering team.
There are a few add-ons available for the PivotalVRP system. These add-ons are important additions that, based on your usage of the PivotalVRP system and/or your environment, contribute to your workflow. They include: Tuning Tasks, Chargeback, Rules Wizard, Variance, Inquire, Playback, and Performance BI. This section provides a brief description of each add-on.

 

7.1.1        Tuning Tasks

The Tuning Tasks add-on is a feature that allows you to create a workflow for transactions that need fine-tuning or that create problems within your database. It is the perfect tool for follow-ups and pulls together all the pertinent information needed with the click of a button.

The tuning task is available from the Dashboard window and several graphs (by right-clicking a transaction), as well as through the Tuning Tasks button, which opens the Tuning Tasks window.


 

  

The tuning task window lets you track all tasks and their status. As an administrator you can change their severity level and edit or remove them when necessary. At a glance you can see which developer is working on the task and what the date of the last update is.

   

7.1.2        Chargeback

The Chargeback add-on was created as an essential tool for PivotalVRP users and providers that use or provide database services, or for organizations wanting to track their tenants' resource usage. It allows you to set pricing for resources and services provided on a per-user basis. With a simple-to-use interface, all you need to do is define your tenants through the variety of parameters available. You start in the customer setting window; once you have created the user, you can define its mapping through various available parameters. The add-on also includes built-in invoice generation, summary billing per customer, and more.

 

There are two main views in the Chargeback add-on: the customer setting view, which contains all the information about your customers, and the billing view, which holds all the information pertaining to billing, such as ad hoc billing, customer billing, and invoices. The Billing View window is divided into three tabs; here you can create ad hoc bills, review your customer billing, create invoices with a right-click, and review all invoices created. The Ad Hoc Billing tab enables you to create customers on the fly and evaluate resource costs by tenant on a single screen.
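The core idea of chargeback, metered resource usage multiplied by configured prices and summed into a per-tenant bill, can be sketched as follows. The resource names and rates are invented for illustration; this is not the add-on's actual pricing model.

```python
# Illustrative per-tenant chargeback: usage times a configured price per
# unit, summed into a bill. All names and rates here are hypothetical.

PRICES = {"cpu_seconds": 0.02, "gb_read": 0.01, "gb_written": 0.03}

def tenant_bill(usage):
    """usage maps resource name -> amount consumed by one tenant."""
    return sum(usage.get(resource, 0) * price for resource, price in PRICES.items())
```

A tenant that consumed 100 CPU seconds and read 50 GB would, under these made-up rates, be billed 100 × 0.02 + 50 × 0.01 = 2.50.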

 

 

 

7.1.3        Rules Wizard

The Rules Wizard add-on is the helping hand you need when you want to optimize your workflow and set a series of rules based on your current loads and the way you want to handle and prioritize them. It gives you step-by-step instructions and recommendations as you go. It also allows you to set priority levels for the rules you want to implement, based on their importance.

 

Once you define the business process and the relevant priority and times, you are asked to define how to activate the rule (through Top SQL, Program, or other parameters). Once you have defined the trigger, right-click it and set the effect of the manageable painful transaction (MPT).

 

The third step in the process opens the list of rules created by the module. Double-clicking any rule at this point opens the standard rules window with all settings as created by the module. You can modify and edit them from this location or from the standard Rules module; they are located in the Business Needs rules module.

 

 

7.1.4        Variance

The Variance add-on is the DBA's ultimate tool. Given the vast amount of information available within the PivotalVRP repository, the Variance add-on is a powerful analytic tool. It can help you prepare your database upgrades by comparing processes running in two different environments. In-depth graphs show you where you have performance improvements and where performance has deteriorated. By pinpointing these transactions, you can quickly decide if your database is ready for the upgrade or if you still need to prepare it before you go online. Its simple graphs and in-depth analytic ability allow you to quickly determine, in a performance degradation situation, what has changed. The "What's New" tab gives you immediate access to the changes that happened within comparative time frames that have different performance. The Skew Analysis provides performance information relating to skewing within your segments.

Statistics comparison will display the CPU & IO resource consumption differences.

 

The Skew Analysis was developed specifically for Greenplum and MPP environments; with just a few clicks you can discover skewing. Through this screen you can also discover the distribution itself. You can select the type of skew you want to discover: a pull-down menu lets you choose between Runtime, CPU time, and IO skew.
As with most PivotalVRP windows, double-clicking any result opens a new window with all the details and specifics relating to the selected session.
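The essence of skew detection in an MPP environment is comparing each segment's share of the work (runtime, CPU time, or IO) to an even split. One simple way to express this is a max-to-mean ratio; this is an illustrative measure, not necessarily the calculation the add-on performs.

```python
# Illustrative skew measure across MPP segments: the ratio of the busiest
# segment's value to the per-segment mean. 1.0 means perfectly even work;
# larger values indicate skew. Hypothetical helper, not the add-on's code.

def skew_ratio(per_segment_values):
    """Return max/mean over segments for one metric (runtime, CPU, or IO)."""
    mean = sum(per_segment_values) / len(per_segment_values)
    return max(per_segment_values) / mean
```

For example, four segments doing [10, 10, 10, 70] units of work have a mean of 25, so the busiest segment carries 2.8 times its fair share.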

 

The CPU Performance, Runtime Performance & I/O performance tabs compare CPU, runtime and I/O consumption between two selected environments.

 

What's New displays sessions that were running in one environment and not the other. Having the ability to dig into comparative performance data and information will often provide insight into the causes of performance degradation. Running multiple variance models on a single baseline can also help forecast future loads and environment behavior.
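The What's New comparison just described is essentially a set difference between the queries observed in each environment, which can be sketched as:

```python
# Illustrative sketch of the "What's New" comparison: the queries/sessions
# seen in the comparison environment but not the baseline. Hypothetical
# helper name; not the add-on's implementation.

def whats_new(baseline_queries, comparison_queries):
    """Return queries running in the comparison environment only."""
    return set(comparison_queries) - set(baseline_queries)
```

Running the difference in both directions also reveals what disappeared between the two time frames.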

 

 

7.1.5        Inquire

The Inquire module is the DBA's Swiss Army knife.
It consists of 3 useful tools that help make a DBA's life easier.


 
The Inquire module's Locks view window displays all current locks in your database.
If you have blocking locks, you can view them by pressing the Get Blocking Locks button.
Start by selecting the master segment from the DB name pull-down menu.
Right-clicking a lock allows you to export the information.
As an administrator and operator, you have the option to release locks through a right-click option. (IMPORTANT! Releasing the lock will KILL the locking session.)

 

The second tab is the DCR report window which enables you to create customized DCR reports.

 

The DCR (Detailed Custom Report) provides a wide variety of information regarding the state of the GP environment.
Information such as free space per segment, size of databases, resource queues and more is available through this report.
To generate the report, select the master segment from the pull-down menu, select the information you wish to display,
and press Generate Report to start the process.
When the report is ready the status log icon will flash red.
Once generated you can always revisit existing DCR reports by retrieving them from the existing reports list.

Having this type of report available can help you understand how your GPDB environment grows over time, and keeping this information for future comparison can help you with forecasts and trending, including those relating to structure and capacity.

 

The DCR reports are kept on the PivotalVRP server:

(MoreDB/Glassfish/glassfish/domains/MoreVRP/docroot/Exports)

The tab on the right is the Resource Queue window, which allows you to quickly view, create, and modify queues and roles within your GPDB/HAWQ environment. By default, if you have not created any queues, you will have the pg_default queue, to which all the defined roles are assigned.

You can add a queue by clicking the add button at the bottom of the screen.


The PivotalVRP Inquire, Resource Queue screen

Clicking the Add (or Modify) button opens a screen that allows you to define the queue parameters.


The PivotalVRP Inquire, queue definition window

On the right-hand side of the screen you can see the Role list with queue assignments; right-clicking (or double-clicking) any role opens a window that enables you to re-assign the role to a queue from the available list.


The PivotalVRP Inquire, role assignment window

** Please note that Roles that are superusers bypass all resource queuing configuration.

If you want to track statistics and performance of resource queues, you need to enable this feature. This is done by setting the following server configuration parameter in the master's postgresql.conf file:
stats_queue_level = on
Note that enabling this feature does add a slight resource overhead, as each query submitted through a resource queue must be tracked.

 

 

7.1.6        Playback

Playback is a virtual time machine. At PivotalVRP we know that situations don't happen out of the blue; the chain of events leading to a situation will usually point to its cause. With the unique Playback module you control time: just set the time you want to investigate, press Play, and see processes run on the display as they happened in the past!


Control time!

 

In case you want to get more info or start the investigation here, you have the ability to use right-click actions.

7.1.7        Performance BI

The Performance BI module lets you dynamically create a drill down path based on your specific needs.

 

To start the investigation into performance with the Performance BI module, you need to define a few parameters:

  • Time Frame
  • DB name (s)
  • Resource order (CPU Time, runtime, executions, reads per sec., etc…)

Once this is defined, you start building the drill-down path by dragging filters, in the order selected, to the work plane (up to 6 filters). At any point you can take a step back and recalculate. The results are displayed both in list form and graphically (top 10). Double-clicking any selection populates the filters with the selected information, and the tree branches out. Right-clicking any selection brings up a few options, such as opening a tuning task, showing the statistical analysis (with details of the selected session), or exporting the selected list as a CSV. Right-clicking the graph allows you to export it as a JPEG.

 

Right-clicking any result presents a host of options; selecting Query Segmentation opens a new window with the performance metrics related to the results.

 

Ideally you can use Performance BI for longer time frames (over 12 hours), but in order to get granular information about items you should drill down into shorter periods of time (under 12 hours, preferably an hour or less).

 

 

Chapter 8 – Important files within PivotalVRP

The table below lists important files that are located in the MoreDB directory on the PivotalVRP server.

  • RuntimeLog.log – More Log file

  • RuntimeDebugLog.log – More Debug log file (if it exists)

  • Rules.db – Rules file

  • Settings.db – Settings and configuration file

  • MoreDB.key – More License file

  • Privileges.rmf – Security configuration file (encrypted)

  • Servers.db – Servers configuration file (passwords encrypted)

  • InstallUtil.InstallLog – Installer log

  • DBDataTable.xml – Settings file

  • AuditLog.log – Log file of the IPs and user names of all Rules and Modules state changes, including kill operations.

 

 

 
