CPS Installation Guide for VMware, Release 22.2.0
If there is a large number of VMs in your CPS deployment, it is recommended to perform a Manual Deployment for one VM (for test purposes). After the first VM deploys successfully, all remaining VMs can be deployed using the Automatic Deployment process.
Before deploying the VMs, build the VM images by executing the following command from the Cluster Manager VM:
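In standard CPS installations, the images are built with the build_all.sh script; the exact path shown below is an assumption, since the command itself is not reproduced in this extract:

/var/qps/install/current/scripts/build_all.sh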
Building /etc/broadhop...
Copying to /var/qps/images/etc.tar.gz...
Copying wispr.war to /var/qps/images/wispr.war
Output images to /var/qps/images/
[root@hostname]#
This section describes the steps to deploy each VM in the CPS deployment individually. To deploy all of the VMs in parallel using a single command, refer to Automatic Deployment of All CPS VMs in Parallel. To deploy a selective list of VMs in parallel using a single command, refer to Automatic Deployment of Selective CPS VMs in Parallel.
Before proceeding, refer to License Generation and Installation to confirm you have installed the license correctly.
For each host that is defined in the Hosts tab of the CPS Deployment Template spreadsheet, execute the following:
The following command uses the short alias name (qns01, qns02, and so on) as defined in the Hosts tab of the CPS Deployment Template. It will not work if you enter the full hostname.
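Based on the deploy.sh path referenced later in this section, the per-host deployment command takes this form when run from the Cluster Manager VM:

cd /var/qps/install/current/scripts/deployer
./deploy.sh $host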
where $host is the short alias name and not the full hostname.
Newly deployed VMs need to be shut down cleanly and started with your preferred method to reserve memory:
cd /var/qps/install/current/scripts/deployer
./deploy.sh --shutdownvm
This section describes the steps to deploy all VMs in parallel in the CPS deployment.
Before proceeding, refer to License Generation and Installation to confirm you have installed the license correctly.
Execute the following command:
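Based on the deploy_all.py script used later in this guide for selective deployment, the all-VMs invocation is likely the following (exact form assumed):

python /var/qps/install/current/scripts/deployer/support/deploy_all.py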
The order in which VMs are deployed is managed internally.
The amount of time needed to complete the entire deployment process depends on the number of VMs being deployed as well as the hardware on which it is being deployed.
The following is a sample list of VM hosts deployed. The list varies according to the type of CPS deployment as well as the information you entered in the CPS Deployment Template.
To install the VMs using shared or single storage, you must use the /var/qps/install/current/scripts/deployer/deploy.sh $host command.
For more information, refer to Manual Deployment.
This section describes the steps to deploy a selective list of VMs in parallel in the CPS deployment.
Before proceeding, refer to License Generation and Installation to confirm you have installed the license correctly.
Execute the following command:
python /var/qps/install/current/scripts/deployer/support/deploy_all.py --vms <filename>
where <filename> is the name of the file containing the list of VMs to deploy, for example:
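For illustration only (the /tmp/vm-list name is reused from the power-off example later in this section), a hypothetical VM list file might contain one short alias per line:

qns01
qns02
lb01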
The amount of time needed to complete the entire deployment process depends on the number of VMs being deployed as well as the hardware on which it is being deployed.
After deploying the load balancer VM, verify the monit service status by executing the following command on the deployed Load Balancer (lb) VM:
/bin/systemctl status monit.service
If the monit service on the load balancer VM is not running, execute the following command on that VM to start it:
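The start command is simply the systemd counterpart of the status check shown above:

/bin/systemctl start monit.service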
Newly deployed VMs need to be shut down cleanly and started with your preferred method to reserve memory:
Shut down and start the selective CPS VMs in parallel:
cd /var/qps/install/current/scripts/deployer/support
python deploy_all.py --vms /tmp/vm-list --poweroffvm
Note: Make sure that all the VMs in the list are powered off by using the above command.
The passwords for the users in an HA or GR deployment are not set by default. Before you can access the deployed VMs or CPS web interfaces, you must set these passwords.
Log into the Cluster Manager VM as the root user. The default credentials are root/CpS!^246.
Execute the change_passwd.sh script to set the password.
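In typical CPS installations this script resides under /var/qps/bin/support; the exact path below is an assumption:

/var/qps/bin/support/change_passwd.sh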
The change_passwd.sh script can also be used to change the root user password on all VMs, including the Cluster Manager VM.
The change_passwd.sh script changes the password on all the VMs temporarily. You also need to generate an encrypted password and add it to the Configuration.csv spreadsheet. To make the new password persistent, execute import_deploy.sh. If the encrypted password is not added to the spreadsheet and import_deploy.sh is not executed, then after running the reinit.sh script, the qns-svn user takes the existing default password from the Configuration.csv spreadsheet.
When prompted, enter qns.
Enter username whose password needs to be changed: qns
If the password is not set, the following message is displayed:
Currently password is not set, please change the password
If a password already exists, you are prompted for the current password:
Enter current password:
When you create or change passwords, the following limitations apply to the passphrase or password:
When prompted, enter and reconfirm the desired password for the qns user.
Enter new password:
Re-enter new password:
Changing password on $host...
Connection to $host closed.
Password for qns changed successfully on $host
If the script prompts for [installer] Login password for 'root':, enter the default password (CpS!^246).
Repeat Step 2 to Step 5 to set or change the passwords for root and qns-svn users.
For more information about this and other CPS administrative commands, refer to the CPS Operations Guide .
After the VMs are deployed, execute the following script from the pcrfclient01 VM:
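In CPS releases, the Policy Builder subversion synchronization is typically started with the start_svn_sync.sh support script; the path below is an assumption, as the script is not named in this extract:

/var/qps/bin/support/start_svn_sync.sh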
This command synchronizes the master/slave Policy Builder subversion repositories.
The following table lists the services and ports that CPS makes available to external users and applications. It is recommended that connectivity to these ports be allowed only from the networks that require access to these services.
Common Port (For HA Environment)
Deprecated Port (For HA Environment)
Custom Reference Data REST API
For a full list of ports used for various services in CPS, refer to the CPS Architecture Guide , which is available by request from your Cisco Representative.
To avoid performance impact you must reserve all allocated memory to each CPS virtual machine. For more information, refer to Reserving Memory on the Virtual Machines (VMs).
Before you perform service configuration, configure the session managers in the cluster. The databases must be up and running for the CPS software to function.
The standard definition for supported replica-set is defined in the mongoConfig.cfg file.
Use the /etc/broadhop/ha_mongoconfig_template file to create /etc/broadhop/mongoConfig.cfg, and modify it to meet your requirements.
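For example, a minimal way to start from the template (adjust the resulting file to your deployment) is:

cp /etc/broadhop/ha_mongoconfig_template /etc/broadhop/mongoConfig.cfg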
If you are using a VIP for the arbiter, it is recommended to keep the VIP and all mongod processes on pcrfclient02 (the default).
Consider the following guidelines for choosing MongoDB ports for replica-sets:
The replica-set script is used to create replica-sets for the following databases. For more information about the script, see Script Usage.
Refer to the /etc/broadhop/ha_mongoconfig_template file and use it to create the /etc/broadhop/mongoConfig.cfg file based on your requirements.
All replica-set members and the required information, such as host name and port number, and the arbiter host name and port number, must be defined in the /etc/broadhop/mongoConfig.cfg file.
Make sure all the replica set ports defined in the mongoConfig.cfg file are outside the range 32768 to 61000. For more information about the port range, refer to http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html.
The following example shows replica-set set04:
[Beginning Set Name-Set No]
    Set name (that is, rep_set04)
    Arbiter VM host with port number
    Arbiter data directory
    Primary Site Member1
    Primary Site Member2
    Data Directory Path for members
[Closing Set Name-Set No]
Run the /var/qps/install/current/scripts/build/build_etc.sh script from the Cluster Manager to finalize the mongoConfig.cfg file; AIDO automatically takes care of updating it.
The build_set.sh script copies the /etc/broadhop/mongoConfig.cfg file to /var/www/html/images/mongoConfig.cfg.
The build_set.sh script is also used to verify replica-set creation.
Option to view help: /var/qps/bin/support/mongo/build_set.sh --help
build_set.sh --help
Replica-set Configuration
-------------------------------------------------------------------------------
Usage: build_set.sh [--setname SETNAME] [--help]
  option1: Database name
  option2: Build operations (create, add or remove members)
  option3: Use --setname SETNAME to build or alter a specific replica-set
           replica-set setnames are defined in the /etc/broadhop/mongoConfig.cfg file

The script applies to
  Database: session, spr, balance, report, portal, admin, audit and bindings db replica-sets
  Config Server: session_configs, spr_configs and bindings_configs db replica-sets

  --all                   : Alias for all databases in the configuration
  --create                : Create a replica-set if force option is given, else it just validates
  --create-asc            : Create a replica-set with set priority in the ascending format if force option is given, else it just validates
  --create-des            : Create a replica-set with set priority in the descending format if force option is given, else it just validates
  --add-members           : Add members to a replica-set if force option is given, else it just validates
                            This applies to members which have been removed from the replica-set using the
                            --remove-members and --remove-failed-members operations
  --remove-members        : Remove specific members from a replica-set
                            For example, a non-active member
  --remove-failed-members : Remove failed/not reachable members from a replica-set
                            On occasion, replica-set members are not reachable due to network issues
  --remove-replica-set    : Remove a replica-set
  --create-scripts        : Create init.d script for the replica-set members if force option is given
  --setname               : The name of a replica-set as configured in /etc/broadhop/mongoConfig.cfg
  --force                 : This option can be used with create & add-members

Examples:
  General operation
    build_set.sh --all --create
    build_set.sh --session --create
    build_set.sh --session --create-asc
    build_set.sh --session --create-des
    build_set.sh --session --add-members
    build_set.sh --session --remove-members
    build_set.sh --session --remove-failed-members
    build_set.sh --session --remove-replica-set
    build_set.sh --session --create-scripts
    build_set.sh --help

  To perform build operations on a specific replica-set:
    build_set.sh --spr --create --setname set04
    build_set.sh --spr --create-asc --setname set04
    build_set.sh --spr --create-des --setname set04
    build_set.sh --spr --add-members --setname set04
    build_set.sh --spr --remove-failed-members --setname set04
    build_set.sh --spr --remove-replica-set --setname set04
    build_set.sh --spr --create-scripts --setname set04
If you want to use build_set.sh to create a replica-set, use the --force option.
When you execute build_set.sh --remove-replica-set, it creates a /var/tmp/stopped-XXXX file (where XXXX is the port number of the replica-set member) on the respective sessionmgr and arbiter VMs. If you want to re-create the same replica-set on the same port, you must manually remove the /var/tmp/stopped-XXXX file from the respective sessionmgr or arbiter VM. AIDO monitors the /var/tmp/stopped-XXXX file on the VM and does not take any action on the replica-set member if the file is present.
You must create the database replica-set members on the same VM and the same port on both sites.
For example, for the session manager database, among the four replica-set members (excluding the arbiter), if sessionmgr01:27717 and sessionmgr02:27717 are two members of the replica-set from SITE1, then choose sessionmgr01:27717 and sessionmgr02:27717 of SITE2 as the other two replica-set members, as shown in the following example:
[SESSION-SET]
SETNAME=set01
OPLOG_SIZE=5120
ARBITER1=SITE-ARB-sessionmgr05:27717
ARBITER_DATA_PATH=/var/data/sessions.1/set1
PRIMARY-MEMBERS
MEMBER1=SITE1-sessionmgr01:27717
MEMBER2=SITE1-sessionmgr02:27717
SECONDARY-MEMBERS
MEMBER1=SITE2-sessionmgr01:27717
MEMBER2=SITE2-sessionmgr02:27717
DATA_PATH=/var/data/sessions.1/set1
[SESSION-SET-END]
Update the mongoConfig.cfg file with the new replica-set.
Execute the following command from the Cluster Manager to finalize the mongoConfig.cfg file; AIDO automatically takes care of updating it:
/var/qps/install/current/scripts/build/build_etc.sh
To verify that a replica-set has been created, run the build_set.sh command for the different replica-sets. The following table describes the commands for each type of replica-set:
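For example, to validate the session replica-set definitions (as noted in the help output above, the create operation only validates when --force is omitted), a check along these lines can be run:

/var/qps/bin/support/mongo/build_set.sh --session --create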
SPR (USum) supports MongoDB hashed sharding.
The ADMIN database holds information related to licensing, diameter end-points and sharding for runtime.
This parameter configures the WiredTiger cache size, in GB, on Session Manager VMs. The configured WT_CACHESIZEGB value is reflected in the mongod processes as the --wiredTigerCacheSizeGB parameter. This is an optional parameter.
Default value: 2 GB
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache. The default WiredTiger internal cache size is the larger of either 50% of (RAM - 1 GB), or 256 MB.
For example, on a system with a total of 4 GB of RAM the WiredTiger cache can use 1.5GB of RAM (0.5 * (4 GB - 1 GB) = 1.5 GB). Conversely, a system with a total of 1.25 GB of RAM can allocate 256 MB to the WiredTiger cache because that is more than half of the total RAM minus one gigabyte (0.5 * (1.25 GB - 1 GB) = 128 MB < 256 MB).
This parameter configures the WiredTiger cache size, in GB, on arbiter VMs. The configured WT_CACHEARBSIZEGB value is reflected in the mongod processes as the --wiredTigerCacheSizeGB parameter. This is an optional parameter.
Default value: 1 GB
Instead of the specific commands described in the table, you can also use the following command:
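The command itself is not reproduced here; based on the build_set.sh help shown earlier, the all-databases form would be the following (add --force to actually create the replica-sets rather than only validate them):

/var/qps/bin/support/mongo/build_set.sh --all --create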
The installation logs are generated in the appropriate directories ( /var/log/broadhop/scripts/ ) for debugging or troubleshooting purposes.
Here are some examples for replica-sets:
Log in to Cluster Manager.
Refer to the /etc/broadhop/ha_mongoconfig_template file and use it to create the /etc/broadhop/mongoConfig.cfg file based on your requirements.
[SESSION-SET1]
SETNAME=set01
OPLOG_SIZE=1024
WT_CACHESIZEGB=2
WT_CACHEARBSIZEGB=1
ARBITER=pcrfclient01:27717
ARBITER_DATA_PATH=/var/data/sessions.1
MEMBER1=sessionmgr01:27717
MEMBER2=sessionmgr02:27717
DATA_PATH=/var/data/sessions.1
[SESSION-SET1-END]

[BALANCE-SET1]
SETNAME=set02
OPLOG_SIZE=1024
WT_CACHESIZEGB=2
WT_CACHEARBSIZEGB=1
ARBITER=pcrfclient01:27718
ARBITER_DATA_PATH=/var/data/sessions.2
MEMBER1=sessionmgr01:27718
MEMBER2=sessionmgr02:27718
DATA_PATH=/var/data/sessions.2
[BALANCE-SET1-END]

[REPORTING-SET1]
SETNAME=set03
OPLOG_SIZE=1024
WT_CACHESIZEGB=2
WT_CACHEARBSIZEGB=1
ARBITER=pcrfclient01:27719
ARBITER_DATA_PATH=/var/data/sessions.3
MEMBER1=sessionmgr01:27719
MEMBER2=sessionmgr02:27719
DATA_PATH=/var/data/sessions.3
[REPORTING-SET1-END]

[SPR-SET1]
SETNAME=set04
OPLOG_SIZE=1024
WT_CACHESIZEGB=2
WT_CACHEARBSIZEGB=1
ARBITER=pcrfclient01:27720
ARBITER_DATA_PATH=/var/data/sessions.4
MEMBER1=sessionmgr01:27720
MEMBER2=sessionmgr02:27720
DATA_PATH=/var/data/sessions.4
[SPR-SET1-END]
After defining the admin database details, rebuild etc.tar.gz.
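The rebuild uses the same build_etc.sh script shown earlier in this section:

/var/qps/install/current/scripts/build/build_etc.sh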
After the replica-sets are created, you need to configure the priorities for the replica-set members using the set_priority.sh command. For more information on set_priority.sh, refer to the CPS Operations Guide.
If more than seven members (including arbiters) must be configured, then the additional data members must be defined as non-voting members in the /etc/broadhop/mongoConfig.cfg file.
Non-voting members allow you to add additional data members for read distribution beyond the maximum seven voting members.
To configure a member as non-voting, its votes and priority values must be set to 0.
This configuration is done by the build_set.sh and set_priority.sh scripts, so a priority of 0 is expected for a non-voting member.
If there are a total of eight data members and one arbiter (nine members in total), six must be defined as MEMBERn and all remaining data members must be defined as NON-VOTING-MEMBERn in the /etc/broadhop/mongoConfig.cfg file.
where n in MEMBERn and NON-VOTING-MEMBERn represents 1, 2, 3, and so on.
[SPR-SET1]
SETNAME=set04
OPLOG_SIZE=3072
ARBITER=site3-arbiter:27720
ARBITER_DATA_PATH=/var/data/sessions.4
PRIMARY-MEMBERS
MEMBER1=site1-sessionmgr01:27720
MEMBER2=site1-sessionmgr02:27720
MEMBER3=site1-sessionmgr03:27720
NON-VOTING-MEMBER4=site1-sessionmgr04:27720
SECONDARY-MEMBERS
MEMBER1=site2-sessionmgr01:27720
MEMBER2=site2-sessionmgr02:27720
MEMBER3=site2-sessionmgr03:27720
NON-VOTING-MEMBER4=site2-sessionmgr04:27720
DATA_PATH=/var/data/sessions.4
[SPR-SET1-END]
You can have a maximum of only seven voting members, including the arbiter, which can be defined as MEMBERn and ARBITERn; all other members must be defined as NON-VOTING-MEMBERn.
The session cache can be scaled by adding an additional sessionmgr VM (an additional session replica-set). You must create a separate administration database, and its hostname and port must be defined in Policy Builder (cluster) as described in the following sections:
After the mongo configuration is completed successfully from the Cluster Manager (the build_set.sh script reports the status of the mongo configuration when it finishes), run the /var/qps/bin/control/restartall.sh script.
Executing restartall.sh will cause messages to be dropped.
After you modify the mongoConfig.cfg file, you can run the synconfig.sh script to rebuild the etc.tar.gz image and trigger each VM to pull and extract it.
From pcrfclient01 or pcrfclient02 VM, execute the following command:
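The command is not named in this extract; in CPS releases the session sharding prompt shown below is typically produced by the session_cache_ops.sh support script, so the path and option here are assumptions:

/var/qps/bin/support/mongo/session_cache_ops.sh --add-shard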
The following screen prompts are displayed:
Session Sharding
--------------------------------------------------------
Select type of session shard    Default     [ ]
                                Hot Standby [ ]
Sessionmgr pairs        :
Session shards per pair :
Select either Default or Hot Standby by placing the cursor in the appropriate field and pressing y .
In Sessionmgr pairs, enter the names of the sessionmgr VM pair separated by a colon (:), along with the port number.
If sharding is needed for multiple sessionmgr VMs, enter each sessionmgr VM name with its port separated by a colon (:), with each pair also separated by a colon (:).
In Session shards per pair, enter the number of shards to be added.
Example: Session shards per pair: 4
Log in to the ADMIN database primary mongo sessionmgr VM using port number 27721 and execute the following commands to verify the shards:
# mongo sessionmgr01:27721
set05:PRIMARY> use sharding
switched to db sharding
set05:PRIMARY> db.shards.find()
# mongo sessionmgr01:27721
MongoDB shell version: 2.6.3
connecting to: sessionmgr01:27721/test
set05:PRIMARY> use sharding
switched to db sharding
set05:PRIMARY> db.shards.find()
{ "_id" : 1, "seed_1" : "sessionmgr01", "seed_2" : "sessionmgr02", "port" : 27717, "db" : "session_cache", "online" : true, "count" : NumberLong(0), "lockTime" : ISODate("2015-12-16T09:35:15.348Z"), "isLocked" : false, "lockedBy" : null }
{ "_id" : 2, "seed_1" : "sessionmgr01", "seed_2" : "sessionmgr02", "port" : 27717, "db" : "session_cache_2", "online" : true, "count" : NumberLong(0), "backup_db" : false, "lockTime" : ISODate("2015-12-16T09:35:06.457Z"), "isLocked" : false, "lockedBy" : null }
{ "_id" : 3, "seed_1" : "sessionmgr01", "seed_2" : "sessionmgr02", "port" : 27717, "db" : "session_cache_3", "online" : true, "count" : NumberLong(0), "backup_db" : false, "lockTime" : ISODate("2015-12-16T09:34:51.457Z"), "isLocked" : false, "lockedBy" : null }
{ "_id" : 4, "seed_1" : "sessionmgr01", "seed_2" : "sessionmgr02", "port" : 27717, "db" : "session_cache_4", "online" : true, "count" : NumberLong(0), "backup_db" : false, "lockTime" : ISODate("2015-12-16T09:35:21.457Z"), "isLocked" : false, "lockedBy" : null }
set05:PRIMARY>
From Cluster Manager, run /var/qps/bin/diag/diagnostics.sh script.
Currently, running diagnostics.sh --ha_proxy as the qns-admin or qns-su user is not supported; it is only supported as the root user.
To verify that the lbvip01 and lbvip02 are successfully configured in lb01 and lb02, perform the following steps:
SSH to lb01. The default credentials are qns/cisco123.
Check whether the virtual interface of the Policy Director (LB) is UP. Use the ifconfig command to confirm that the virtual interfaces are UP. If extra diameter interfaces were configured, verify that the corresponding VIPs are up for those interfaces.
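For example, a quick check from lb01 (interface names vary by deployment; the lbvip01/lbvip02 names are the VIP hostnames referenced above):

ifconfig -a
ping -c 1 lbvip01
ping -c 1 lbvip02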
From Cluster Manager, verify that you are able to ping all the hosts in the /etc/hosts file.
The following commands can be used to verify whether the installation was successful or not:
For more information on other CPS administrative commands, refer to CPS Operations Guide .
This command runs a set of diagnostics and displays the current state of the system. If any components are not running, red failure messages are displayed.
This command prompts for a reboot choice. Select Y and proceed.
/var/qps/bin/diag/diagnostics.sh -h
Usage: /var/qps/bin/diag/diagnostics.sh [options]
This script runs checks (i.e. diagnostics) against the various access, monitoring, and configuration points of a running CPS system.
In HA/GR environments, the script always does a ping check for all VMs prior to any other checks and adds any that fail the ping test to the IGNORED_HOSTS variable. This helps reduce the possibility for script function errors.

NOTE: See /var/qps/bin/diag/diagnostics.ini to disable certain checks for the HA/GR env persistently. The use of a flag will override the diagnostics.ini value.

Examples:
  /var/qps/bin/diag/diagnostics.sh -q
  /var/qps/bin/diag/diagnostics.sh --basic_ports --clock_skew -v --ignored_hosts='portal01,portal02'

Options:
  --basic_ports : Run basic port checks
      For HA/GR: 80, 11211, 7070, 8080, 8081, 8090, 8182, 9091, 9092, and Mongo DB ports based on /etc/broadhop/mongoConfig.cfg
  --clock_skew : Check clock skew between lb01 and all vms (Multi-Node Environment only)
  --diskspace : Check diskspace
  --get_replica_status : Get the status of the replica-sets present in environment. (Multi-Node Environment only)
  --get_shard_health : Get the status of the sharded database information present in environment. (Multi-Node Environment only)
  --get_sharded_replica_status : Get the status of the shards present in environment. (Multi-Node Environment only)
  --ha_proxy : Connect to HAProxy to check operation and performance statistics, and ports (Multi-Node Environment only)
      http://lbvip01:5540/haproxy?stats
      http://lbvip01:5540//haproxy-diam?stats
  --help -h : Help - displays this help
  --ignored_hosts : Ignore the comma separated list of hosts. For example --ignored_hosts='portal01,portal02'
      Default is 'portal01,portal02,portallb01,portallb02' (Multi-Node Environment only)
  --ping_check : Check ping status for all VM
  --qns_diagnostics : Retrieve diagnostics from CPS java processes
  --qns_login : Check qns user passwordless login
  --quiet -q : Quiet output - display only failed diagnostics
  --redis : Run redis specific checks
  --svn : Check svn sync status between pcrfclient01 & pcrfclient02 (Multi-Node Environment only)
  --tacacs : Check Tacacs server reachability
  --swapspace : Check swap space
  --verbose -v : Verbose output - display *all* diagnostics (by default, some are grouped for readability)
  --virtual_ips : Ensure Virtual IP Addresses are operational (Multi-Node Environment only)
  --vm_allocation : Ensure VM Memory and CPUs have been allocated according to recommendations
[root@pcrfclient01 ~]# diagnostics.sh
QNS Diagnostics
Checking basic ports (80, 7070, 27017, 27717-27720, 27749, 8080, 9091). [PASS]
Checking qns passwordless logins on all boxes. [PASS]
Validating hostnames. [PASS]
Checking disk space for all VMs. [PASS]
Checking swap space for all VMs. [PASS]
Checking for clock skew. [PASS]
Retrieving QNS diagnostics from qns01:9045. [PASS]
Retrieving QNS diagnostics from qns02:9045. [PASS]
Checking HAProxy status. [PASS]
Checking VM CPU and memory allocation for all VMs. [PASS]
Checking Virtual IPs are up. [PASS]
[root@pcrfclient01 ~]#
This command displays:
This command can be executed from Cluster Manager or OAM (PCRFCLIENT).