How-to Upgrade Cisco Nexus 1000v

Here is a great article on how to upgrade the Cisco Nexus 1000V in a virtual environment:

Upgrade Cisco Nexus 1000v Firmware

This is a really good post by Mark Strong.

The only thing I would change in the article is Step 4: I used a TFTP server instead of copying the image to the Nexus 1000V VSM over SCP with PuTTY.
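For reference, here is roughly what that copy step looks like from the NX-OS CLI on the VSM. The server address and image file names below are placeholders of my own, not the ones from the article:

```
! Placeholder server IP and image names - substitute your own.
! Copying the kickstart and system images to bootflash over TFTP:
switch# copy tftp://192.0.2.10/n1000v-dk9-kickstart.bin bootflash:
switch# copy tftp://192.0.2.10/n1000v-dk9.bin bootflash:
! The article's approach instead copies over SCP, e.g.:
switch# copy scp://admin@192.0.2.10/images/n1000v-dk9.bin bootflash:
```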



Upgrade of vCenter 5.5 to vCenter 6.0 with External PSC

I was pretty surprised to see that there are very few posts on the internet detailing the process of upgrading vCenter 5.5 to vCenter 6.0 with an external PSC, so here is my recent experience of how I did it.


The environment consisted of:

vCenter Server 5.5 (Windows)

vCenter SSO 5.5 (Windows)

SQL Server 2012 (Windows)

  1. Take snapshots of the vCenter Server, vCenter SSO, and SQL Server machines
  2. Take a backup of the vCenter database on the SQL server
  3. Mount the “VMware-VIMSetup-all-6.0.0-2562643.iso” on the Windows vCenter SSO 5.5 server
  4. Run the autorun program and start the vCenter setup
  5. The installer automatically detects SSO 5.5 on the server and notifies you that it will be upgraded to PSC 6.0
  6. Click Next to continue and upgrade SSO 5.5 to PSC 6.0
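For step 2, the database backup can also be taken from a command prompt on the SQL server with sqlcmd. A minimal sketch, assuming a default instance, Windows authentication, and a vCenter database named VCDB (all assumptions — adjust to your environment):

```
:: Assumptions: default SQL instance, Windows auth, database named VCDB.
sqlcmd -S localhost -E -Q "BACKUP DATABASE VCDB TO DISK = N'D:\Backups\VCDB_pre_60_upgrade.bak' WITH INIT"
```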

This completes the upgrade of SSO 5.5 to PSC 6.0.

Once that process is complete, you can go ahead and upgrade vCenter 5.5 to vCenter 6.0, since the SSO component is now a PSC 6.0.

That is how you can upgrade vCenter 5.5 to 6.0 with an external PSC when there is already an SSO 5.5 server in the environment.

Upgrade or Patch An External PSC (Platform Services Controller) vCenter 6.0

Today, I had to patch a pair of external PSCs at a customer site, and here is the process I followed:

  1. First, take a snapshot of the platform controllers (PSC01 and PSC02)
  2. Enable SSH on the platform controllers; I did this by logging in to the DCUI of each PSC appliance and enabling SSH
  3. Use a client like PuTTY to SSH into PSC01
  4. Before doing anything else, mount the ISO, in my case “VMware-vCenter-Server-Appliance-6.0.0.xxxxx-36xxxxx-patch.iso”, to the PSC01 virtual machine
  5. Once logged in to PSC01 over SSH, run the following command
  6. Command> software-packages stage --iso --acceptEulas
  7. The above command stages the patch packages on the appliance
  8. Once the staging is complete, run the following
  9. Command> software-packages install --staged
  10. This starts the installation of the staged packages on the PSC
  11. Repeat the same steps on PSC02

Here is a screenshot of the commands:


PS: I have observed that the installation process takes quite a while to complete.

Here is the error I came across while upgrading the PSC:


NOTE: I received the error “specified group ‘Ip’ unknown” while upgrading the PSC; the solution is to enable IPv6 on the PSC before starting the upgrade. This solution was taken from the release notes here

EDIT: I have observed that even if you enable IPv6 on the appliance, you still get the message “Specified group ‘Ip’ unknown”; however, the upgrade proceeds fine and completes.


EDIT: You can also use the following command if you don’t want to stage the packages first:

Command>software-packages install --iso --acceptEulas
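If I recall correctly, the appliance shell in vCenter 6.0 can also list what has been staged before you commit to the install; worth verifying against your build, but the command should be along these lines:

```
Command> software-packages list --staged
```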



Nutanix NX-3060 Node and First Observations

Hello All,

I recently got some hands-on experience with a Nutanix NX-3060 block, and here are my observations about the block, its components, and its functionality.



Model: NX-3060-G4

Number of nodes: 4

Number of hosts: 4

Hypervisor: ESXi 6.0

Total number of disks: 24 (8 SSD, 16 HDD)


User Interface

The user interface for Nutanix is Prism.

Once you power on the block, browsing to the management IP address takes you to the web console, as shown below:



Once you get in, the main console is really good. I liked the way things are arranged on the main dashboard.


The main dashboard shows a summary of the hypervisor installed on the nodes in this block and its version (here, ESXi 6.0); the storage capacity of the cluster, including used and free space; the VMs present on the ESXi hosts in this cluster (a 4-node cluster); a hardware summary of all the nodes in the cluster; and graphs of IOPS, latency, and network bandwidth passing through the cluster. It also shows the overall CPU usage and cluster memory usage, which appear to refresh continuously.

It also shows the overall health of the Nutanix cluster, which can be drilled down into its respective components, as well as the data resiliency status of the cluster. Since this test machine has 4 nodes (the minimum cluster size in Nutanix is 3 nodes), the cluster is reported as resilient.

Alerts, including Critical, Warning, and Informational, along with Events, are also displayed on the main dashboard once you log in to the interface. It’s a nice touch to see the critical alerts right when you log in to the main web interface, and you can drill down into an alert for more information on that particular alert.

Here are some of the things I found worth reviewing:

Health and state of the Nutanix block as well as the virtual components (ESXi hosts and VMs)

The health of the Nutanix block is displayed on the main page once you log in to Prism (the Nutanix software that keeps track of all the components of a Nutanix block/cluster and provides a web interface).

You can click on the drop-down menu at the top of the page and click Health, which displays the health of all the components in detail and can again be drilled down per component.



Alerts and Warnings of the components in the Nutanix Block

The web interface shows critical and other alerts right on the main page after signing in to the Prism interface; these show you what the alert is, and clicking on an alert takes you deeper to see the cause of and resolution to the issue.

These can also be accessed through the Alerts option in the drop-down menu.




Storage and the ease of creating and presenting storage containers to the ESXi hosts

Creating a storage pool and a storage container in the Nutanix web console is very easy, and anyone without a storage background can accomplish it too. You do this by going into the Nutanix web console and clicking the Storage option in the drop-down menu at the top left of the page.

Once on the storage page, click Storage Pool and create a pool using all the disks presented, or according to your requirements (Nutanix recommends creating one pool with all the available disks and then creating multiple containers). Once the pool is created, click the Container option on the same page, next to the storage pool, create a container of the required size in the storage pool, and you are all set to present that container to the ESXi hosts!

This view also shows how many containers are present and how many VMs reside on each container.
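The same storage-pool and container creation can reportedly be done from the Controller VM with Nutanix's ncli instead of the web console; the entity and parameter names below are from memory and should be double-checked against `ncli help` on your NOS/AOS version:

```
# From memory - verify entity and parameter names with `ncli help` before use.
ncli storagepool list
ncli container create name=ctr01 sp-name=sp01
ncli container list
```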


Things I observed that were not being reported correctly –

One thing I observed that was not reported correctly in the Nutanix web console is the health of the hosts in the Hardware view. I had a host down in my environment due to a metadata store issue, and it was down along with its Controller VM, but the Nutanix web console still did not show this in the main Overview of the Hardware tab; it did report it in the Diagram and Table views.