09.10.2019
Use FreeNAS for VMware vSAN

They are totally different things. A SAN or vSAN is a block storage system, usually accessed by an iSCSI client or an HBA (host bus adapter). FreeNAS is, for the most part, a file-level-access open source replacement for Windows, Mac, and NFS file servers. In a VM, FreeNAS loses one of its main features and attractions, which is pooling a group of hard drives of differing sizes, makes, and models into one large common storage pool. I am deliberately avoiding the separate discussion about the woes of corrupted FreeNAS storage pools, as it is not part of your question; you can find dozens of threads on SW covering that issue. If you have access to virtualization, your better free choice is to spin up a Linux server and run Samba on a virtual hard drive for file-sharing space.
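If you go the Linux-plus-Samba route, the share itself is only a few lines of configuration. A minimal sketch, assuming a hypothetical share directory at /srv/share and a hypothetical group named fileusers:

    # /etc/samba/smb.conf -- minimal standalone file server sketch
    [global]
        workgroup = WORKGROUP
        server role = standalone server
        map to guest = Bad User     # unknown users fall back to guest

    [share]
        path = /srv/share           # hypothetical share directory
        valid users = @fileusers    # hypothetical Unix group
        read only = no
        browseable = yes

Validate with testparm and restart the smbd service to apply; clients then reach the share at \\servername\share.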


Which gives you better backup and recovery options. Most people here will strongly advise you to avoid FreeNAS for any business use entirely.

Alain3888 wrote: Is there anyone out here that uses FreeNAS as a 'VSAN' solution? We are looking into this in more detail; I see FreeNAS come up a lot, and we might give it a try. We are planning to install FreeNAS in a VM with separate VHDs. Is this best practice, or are there other solutions? (We are currently testing out StarWind and how it works on a Hyper-V host.)

Dbeato wrote: StarWind works amazingly on Hyper-V (although they are moving away from that soon, I believe). But I wouldn't use a NAS as a SAN or iSCSI target unless it is for a lab.

Doommood wrote: Could you describe your production environment in more detail? The replies could be even more useful with more specifics about your case: what you have to run on the storage, and what kind of performance and capacity you estimate you'll need.

Alain3888 wrote: Well, we have 4 Hyper-V nodes with dedicated storage, a mix of SSDs and SAS drives, running Windows Server 2012 R2 Standard. The ideal solution is software that combines all of the storage so we have one repository for it. Right now we lose a lot of time working out where best to place each VM based on storage capacity; RAM is no issue, since we install the maximum amount of RAM in every node.

Colin Kent wrote: To answer my own post: I just had a read up, and there are several distros for this sort of thing. One grabs my eye.

Reply: Be careful! It's actually Ceph with a set of UI wrappers and CLI scripts. Ceph is an EXTREMELY complex solution, and it's very hard to architect and manage. So there are two possible options (other than not using Ceph at all, LOL): 1) you use Ceph and you LEARN it, becoming super-pro in a reasonable time frame, or 2) you hand (1) to people who are already familiar with commercial deployments. Unfortunately, 'simplifications' will lead an inexperienced person to conclude it's all a piece of cake, and it turns into a disaster of sorts when his production Ceph collapses.


P.S. This is exactly the same reason why I like FreeBSD + ZFS but really hate all the UIs and the forks of FreeBSD that exist just to serve ZFS. ZFS isn't Ceph (it's way, way more mature), but the concept of (mis)using it is exactly the same.

P.P.S. Replying to Alain3888's description of his environment above:

We can do that, no problem; we've done it for 3,000+ paying customers so far :) Ping me if you find the evaluation not so straightforward, or whatever. Max (StarWind), Artem (StarWind).

Replying to Dbeato's point that StarWind works great on Hyper-V but that a NAS shouldn't be a SAN or iSCSI target outside a lab: Right, it's more about Microsoft pushing everybody from on-premises to Azure. They keep shooting themselves in the foot by destroying the SMB on-premises market that VMware pretty much abandoned and let Microsoft own.

So I'll preface this by saying that another admin and I are trying to resolve this issue. We didn't build the problem system this way, and we're new to FreeNAS; we're just trying to see if we can fix it.

The original system builder is gone. I spent most of last night and this morning trying to come up to speed on FreeNAS and to troubleshoot as much as I can, so please forgive me (and correct me) if I use some incorrect terminology.

The problem: We have a FreeNAS server that presents about 12 terabytes of storage to a vSphere environment over iSCSI. The pool filled up, even though the overlaid VMFS only sees about half of the space as used. We're not able to do anything with this NAS, as VMware receives out-of-space errors when trying to write to or use it. From all of my reading, I get why this is happening: VMware allocated all of this space during thin provisioning, but the underlying ZFS has no way to know the space is free when VMware is done with it.

So from VMware's perspective there's still plenty of space. However, when it tries to write to that space, the copy-on-write properties of ZFS prevent it from doing so, because the pool is full from the ZFS point of view. My question: is there anything we can do to fix this without having to migrate all of our VMs off and rebuild the NAS from scratch?

Things that we have tried:

Deleting VMs: This obviously didn't work, since VMware isn't actually freeing the space in a way that ZFS knows about. However, I wanted to note that we tried.

Adding a thick VM: Someone suggested provisioning a thick, lazy-zeroed VM, as that would zero out the space that VMware isn't using. This failed, and VMware received an error that there was no more space available.

UNMAP: Sending an UNMAP command from ESX using the 'esxcli storage vmfs unmap -l ' command. This failed, saying 'Devices backing volume do not support UNMAP'. My reading indicates that the volume must be sparse for this to work. I don't know if it was set up to be sparse, but I checked the refreservation and it was 'none', as described in these instructions:

My counterpart noticed that the iSCSI extent is set to 'file' instead of 'device.'
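For reference, the UNMAP attempt looks roughly like this from the ESXi shell (a sketch; the datastore label below is a placeholder):

    # ask VMFS to tell the backing storage which blocks are free
    esxcli storage vmfs unmap -l iscsi-datastore01

    # optionally cap how many VMFS blocks are reclaimed per pass
    esxcli storage vmfs unmap -l iscsi-datastore01 -n 200

The command only helps if the extent underneath can actually pass the UNMAP through to ZFS, which is where the file-versus-device question below comes in.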

Could that be why UNMAP is failing? My understanding is that FreeNAS 9.10.1-U4 does indeed support UNMAP.

Setup: Our FreeNAS version is 9.10.1-U4. The volume is configured with 7 vdevs: 6 are mirrors, and 1 labeled 'cache' is striped. The mirrors each have two 2 TB HDDs in them; the cache stripe has two 250 GB HDDs. There is one single dataset (I think that's the right term) configured under this volume.


It has all 12 terabytes of the storage allocated to it, and this is shared over iSCSI as an extent of type 'file' with a size of the full 12 TB. Any help that anyone could provide is greatly appreciated. As I said, we didn't build it this way, and my reading of these forums indicates that this is very much a suboptimal configuration (e.g., more than 60% of the pool should never be in use). We're just trying to get things working again. Thank you very, very much for your time.
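On the file-versus-device question: a FreeNAS device extent is backed by a zvol rather than a file on a dataset, and a sparse zvol is what generally lets UNMAP hand space back to the pool. A minimal sketch with hypothetical pool and zvol names (it won't rescue an already-full pool, but it's the shape a rebuilt extent would take):

    # create a sparse (-s) zvol to back an iSCSI device extent
    zfs create -s -V 12T tank/vmware-extent

    # confirm nothing is reserved up front
    zfs get volsize,refreservation tank/vmware-extent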

Arguably incorrect; with NFS, if you go and delete a VM, ZFS is aware of the VMDK deletion and frees the space. As an iSCSI device, it may not, because it may have nothing to indicate to it that those blocks are now free. This is kind of what UNMAP was intended to address.
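Since NFS sidesteps block reclamation entirely (ZFS sees the file deletions directly), moving this kind of workload to an NFS datastore is one way out. A sketch of mounting one from the ESXi side, with hypothetical host, export, and datastore names:

    # mount an NFS export as a datastore on the ESXi host
    esxcli storage nfs add -H nas01.example.com -s /mnt/tank/vmstore -v nfs-datastore01

    # confirm it mounted
    esxcli storage nfs list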

If you're lacking UNMAP because you're using a file-based extent, and your pool fills, then you're all sorts of screwed and you have to tread very carefully. This is another very good reason you really want to keep pool utilization below 50%.

We use vSphere 6.0 with a variety of guests, both Windows (client and server) and Linux (typically Ubuntu and CentOS, but there are others). Our storage needs are fairly simple for this cluster. It's sort of like a big ephemeral lab configuration, so VMs are short-lived and not handling any 'important' workload. We're thinking Windows Server with NFS shares is going to work well (and indeed it works well for our other NAS servers that use it). While FreeNAS and ZFS are interesting to me, we just don't have the problems that ZFS solves, so we don't really need the additional administrative overhead of another technology.

Thankfully, we were able to clean up some of the ephemeral VMs and vMotion them off to another NAS while we rebuild this one. As I mentioned, this wasn't our configuration, and my initial reading of the forums and documentation here made me realize pretty quickly that this was a very suboptimal configuration; we're lucky that we have the vMotion swing space to shuffle things around. Thanks again so much for the help, everyone.
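On that utilization point, the fill level is quick to check from the FreeNAS shell (a sketch; 'tank' is a placeholder pool name):

    # overall pool fill level -- CAP is the percentage to watch
    zpool list -o name,size,allocated,free,capacity tank

    # per-dataset usage, including space held by reservations
    zfs list -o name,used,avail,refer,refreservation -r tank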