Use FreeNAS for VMware vSAN

15.10.2019 by admin

Use the iSCSI target service to enable hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore. This feature enables an iSCSI initiator on a remote host to transport block-level data to an iSCSI target on a storage device in the vSAN cluster. vSAN 6.7 and later releases support Windows Server Failover Clustering (WSFC), so WSFC nodes can access vSAN.

Jul 27, 2018: I run FreeNAS on two different, large hosts chock-full of drives with iSCSI running, and all the VMware hosts mount both iSCSI datastores from both NAS boxes. I then use VMware HA (which keeps the VMs running on two or more hosts and synced in both datastores). Most hosts have 10Gb cards for iSCSI/vMotion traffic and standard 1Gb cards for network access.
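The initiator side of that forum setup can be scripted from the ESXi shell. Here is a minimal sketch, assuming the software iSCSI adapter registers as vmhba64 and the FreeNAS portal listens at 192.168.1.50:3260 (both are placeholders for your own values):

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Point dynamic discovery at the FreeNAS portal
    # (adapter name and portal address are placeholders for this lab)
    esxcli iscsi adapter discovery sendtarget add \
        --adapter=vmhba64 --address=192.168.1.50:3260

    # Rescan so the exported LUNs show up and can be used as datastores
    esxcli storage core adapter rescan --adapter=vmhba64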

I wanted to write a blog post about my FreeNAS installation. I’ve been testing out FreeNAS 9.3 and find it to be well suited for my home lab.

Having read several posts saying not to run FreeNAS as a VM, other blogs saying “you can, but shouldn’t”, and some saying “yes you can, just make sure”, I wanted to try it for myself and find out whether I could build a stable setup in my home lab.

To start with, I had been running FreeNAS 9.3 on one of my physical hosts, booting from a USB stick. The setup was pretty stable, but I believe the cheap USB stick I used for boot died, as after a reboot yesterday the BIOS could not find the USB drive to boot from.

That gave me a reason to make some changes. I want to test FreeNAS 10.1, which is available as a nightly build. I also didn’t want to run FreeNAS on one of my physical hosts, as that host is a Dell R710 server with dual X5675 Xeon 3GHz 6-core CPUs, 288GB of RAM, 6 x 2TB disks, and a Dell H700 controller. The machine was total overkill just to serve storage to my other Dell R710 VMware host with the same specs.

The main issue with running FreeNAS, which uses ZFS, is that my server has the Dell H700 controller (LSI 2108 based), and that controller cannot work in IT mode. (IT mode is a “non-RAID” mode where each HDD is visible to the OS without creating disk volumes on the RAID controller.) ZFS wants to see the raw disks without a RAID controller in between, and some controllers can be flashed with an IT-mode firmware or support it natively, like the Dell H200.
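If you’re unsure how your controller presents disks, a quick check from the FreeNAS (FreeBSD) shell will show it. A minimal sketch; the mfiutil line assumes the H700 is driven by FreeBSD’s mfi(4) MegaRAID driver:

    # Raw SAS/SATA devices as the OS sees them (what ZFS wants)
    camcontrol devlist

    # Virtual disks defined on the LSI/Dell MegaRAID controller, if any
    mfiutil show volumes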

I didn’t want to experiment with cross-flashing my controller with an original LSI firmware, and I’m not sure that would enable IT mode on the LSI 2108 chip anyway. So I went looking for a solution I could actually use.

The recommended approach I found was to present the SAS controller to the FreeNAS VM via PCI pass-through. But since I wanted to use all six of my 2TB drives for ZFS, and my R710 has only six 3.5″ HDD bays, I needed somewhere to put a datastore for the FreeNAS VM’s boot disk and configuration files. That made pass-through a no-go for me: I only have one controller in the server, so I can’t pass it through and still have a datastore for the FreeNAS VM. Instead, I went with RDM to hand the disks to the FreeNAS VM (see the sketch below).

What I did was carve out a 50GB volume from one of the 2TB disks when creating the virtual disk on the H700 controller, and then create a secondary volume with the remaining space. Note that the other virtual disks must be created with the same size as the first one, because FreeNAS won’t put different-sized disks into the same ZFS RAID volume.

I also wanted to test vSAN in my lab without having to go out and buy SSDs or invest in more hardware. The obvious path was to spin up several ESXi VMs, sort out their networking, and mark normal HDD-backed volumes as SSD (see the second sketch below). To make things easy for you, I took screenshots and wrote down every setting and step in this blog post.

To prepare networking for the ESXi VMs, you have to set Promiscuous mode to “Accept” in the security settings of the port group your ESXi VMs are on. You should not do this on your whole vSwitch in a production installation.
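Here is the RDM sketch mentioned above. The mapping file can be created from the ESXi shell with vmkfstools; the device ID and datastore path are placeholders, and -z creates a physical-compatibility mapping (use -r for virtual compatibility):

    # List device IDs first and substitute yours below
    esxcli storage core device list

    # Create an RDM pointer file that hands the H700 volume to the FreeNAS VM
    vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
        /vmfs/volumes/datastore1/freenas/freenas-disk1.vmdk

The resulting .vmdk is then attached to the FreeNAS VM like any other virtual disk.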

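And here is the SSD-tagging sketch: one common way to make a nested ESXi host treat an HDD-backed device as flash is a SATP claim rule with the enable_ssd option (device ID again a placeholder):

    # Tag the device as SSD so vSAN will accept it for the flash/cache tier
    esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
        --device=naa.xxxxxxxxxxxxxxxx --option="enable_ssd"

    # Reclaim the device so the new claim rule takes effect
    esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx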
In my lab I created a “NestedESXi” port group, where I enabled promiscuous mode by overriding the vSwitch default setting of “Reject”. This allows packets to travel from the physical NIC on your ESXi host to the virtual NIC of your virtual ESXi host, and on to the virtual NICs of its nested VMs. Think Inception, plus communication between each layer.

The next thing to do is to create the ESXi VMs.
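If you prefer the command line to the vSphere client, the same port-group override can be applied from the ESXi shell. A sketch assuming a standard vSwitch and the port-group name used above; forged transmits are commonly also enabled for nested ESXi, so that flag is included:

    # Allow promiscuous mode and forged transmits on the nested-ESXi port group
    esxcli network vswitch standard portgroup policy security set \
        --portgroup-name=NestedESXi --allow-promiscuous=true \
        --allow-forged-transmits=true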
