Re: Best design for using 8 physical NICs on an ESXi 5.1 host

If the portgroups for Management, vMotion, and VM Networking never use uplinks 7 and 8, then you might as well create a separate vDS dedicated to NFS. That would be cleaner, easier to understand, and simpler to configure. I actually like this design, or the variation I'll show below, but it still puts the management portgroup on the vDS, and I really don't like that. People say it is safe, but if vCenter runs on one of the hosts it manages, a cold restart of your environment is going to be quite a pain.

Furthermore, host profiles and the vDS both try to control the networking (unless you exclude parts of the networking config from the host profile), and this almost always makes life harder. It's something you probably have to experience in a large-scale environment to truly appreciate how annoying it is. In environments with fewer than 20 hosts it probably isn't much of an issue, because you can take the extra time required to get it right without blowing out the total time needed to implement the entire solution.

 

Also, when you put portgroups on a vDS, recognize that those portgroups become available for VMs to attach to. In every environment I've worked in where management, vMotion, and FT shared a vDS with VM Networking, somebody eventually connected VMs to portgroups they shouldn't be using. That's quite a security issue; I'm not sure why people just don't think.

 

The following design is a derivative of the mrlesmithjr design. It requires a separate VLAN for each traffic type. Each subsequent VM Networking portgroup keeps the same configuration; only the VLAN changes. A configuration sketch follows the list below.

 

vDS - NIOC - Route based on physical NIC load - 4 NICs Total

dvpg-Management - dvuplink1-4 all active

dvpg-vMotion1 - dvuplink2 active, dvuplink1,3,4 standby

dvpg-vMotion2 - dvuplink3 active, dvuplink1,2,4 standby

dvpg-VMNetwork1 - dvuplink1-4 all active

dvpg-VMNetwork2 - dvuplink1-4 all active
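Here's a minimal pyVmomi sketch of that layout. This is my illustration, not part of the original design: it assumes a vDS object dvs has already been retrieved through the vCenter API, and the VLAN IDs are placeholders you'd substitute with your own.

from pyVmomi import vim

def teaming(policy_name, active, standby):
    # policy_name "loadbalance_loadbased" = Route based on physical NIC load
    return vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False, value=policy_name),
        uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
            inherited=False, activeUplinkPort=active, standbyUplinkPort=standby))

def pg_spec(name, vlan_id, active, standby, policy_name="loadbalance_loadbased"):
    # One distributed portgroup with an explicit uplink failover order.
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            inherited=False, vlanId=vlan_id),
        uplinkTeamingPolicy=teaming(policy_name, active, standby))
    return vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", numPorts=32,
        defaultPortConfig=port_cfg)

ALL = ["dvuplink1", "dvuplink2", "dvuplink3", "dvuplink4"]

specs = [
    pg_spec("dvpg-Management", 10, ALL, []),
    pg_spec("dvpg-vMotion1",   11, ["dvuplink2"],
            ["dvuplink1", "dvuplink3", "dvuplink4"]),
    pg_spec("dvpg-vMotion2",   12, ["dvuplink3"],
            ["dvuplink1", "dvuplink2", "dvuplink4"]),
    pg_spec("dvpg-VMNetwork1", 20, ALL, []),
    pg_spec("dvpg-VMNetwork2", 21, ALL, []),
]

dvs.AddDVPortgroup_Task(specs)  # one task creates all five portgroups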

 

 

vDS - NIOC & SIOC - Route based on IP HASH - LACP Enabled - 4 NICs Total

dvpg-NFS - dvuplink1-4 all active
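The NFS portgroup on the dedicated storage vDS can reuse the pg_spec() helper from the sketch above, swapping the teaming policy to "loadbalance_ip" (the API name for Route based on IP hash). One caveat worth repeating: IP hash requires all uplinks active and a matching port channel on the physical switch, and on a 5.1 vDS the LACP setting itself lives on the uplink portgroup in the vSphere Web Client, which this sketch doesn't touch. VLAN 30 is a placeholder.

nfs_spec = pg_spec("dvpg-NFS", 30, ALL, [], policy_name="loadbalance_ip")
storage_dvs.AddDVPortgroup_Task([nfs_spec])  # storage_dvs = the dedicated NFS vDS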

 

If your environment is big enough, you should consider a dedicated management cluster where core servers like vCenter, SSO, and SQL live; a management cluster is still the recommended best practice from VMware. For this cluster you would stick to standard virtual switches for management, making it the simplest possible configuration. This will help in the event of a cold start and/or major maintenance tasks. If you have a management cluster, the risks associated with having management portgroups on a vDS are drastically reduced.
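To illustrate how simple the management-cluster networking stays, here is a hedged per-host sketch using the same pyVmomi session as above. The switch name, portgroup name, VLAN, and vmnic list are all my assumptions, not VMware's or anyone else's defaults.

def build_mgmt_networking(host):  # host: vim.HostSystem
    # Standard-vSwitch management networking, configured per host so it
    # keeps working with or without vCenter during a cold start.
    ns = host.configManager.networkSystem
    ns.AddVirtualSwitch(
        vswitchName="vSwitch-Mgmt",
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(
                nicDevice=["vmnic0", "vmnic1"])))
    ns.AddPortGroup(
        portgrp=vim.host.PortGroup.Specification(
            name="Management Network", vlanId=10,
            vswitchName="vSwitch-Mgmt",
            policy=vim.host.NetworkPolicy()))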

 

Cheers,

Paul

