Did you manage to find a solution for this?
Re: cDOT NFS - .snapshot directory automounting
NFS (v3 and v4) mount gives access denied on junction path
Hi,
I am pretty new to Clustered Data ONTAP. On our FAS2620 running NetApp Release 9.4P4 I created an export policy for a share with NTFS security. The strange thing is that I can mount the share /vol/vol1, but not /vol/vol1/projects.
When I mount /vol/vol1 I can see the projects folder and have the correct access rights with my user, but I would rather mount /vol/vol1/projects directly. When I try it on my Ubuntu 18.04 Linux client, I get an error message:
mount.nfs: access denied by server while mounting svm1:/vol/vol01/projects
On the other hand, mount of /vol/vol1 succeeds without problems.
What am I missing?
Kind regards,
Andreas
Snap Reserve on CIFS shares
ONTAP 9.4, AFF300, autogrow/shrink enabled across the board, autodelete enabled on iSCSI LUN serving volumes only.
To reserve, or to not reserve?
Customer historically did 20% snap reserve on all volumes. I can see a snap reserve making sense if there is an SLA that says a minimum number of snapshots must be retained. I enforce quotas on all my shares, so running out of space in the volume is not an issue.
It doesn't seem like setting the snap reserve is doing anything for me. Quotas, volume efficiency, autogrow/shrink, and autodelete should ensure I have adequate space for snapshots. At worst, the reserve just holds storage captive.
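For reference, the reserve can be checked and zeroed per volume; a sketch, assuming the ONTAP 9.x CLI with hypothetical SVM/volume names:

```
::> volume show -vserver svm1 -volume vol1 -fields percent-snapshot-space
::> volume modify -vserver svm1 -volume vol1 -percent-snapshot-space 0
```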
Comments?
TIA
Re: NFS (v3 and v4) mount gives access denied on junction path
wrote: Hi,
I am pretty new to Clustered Data ONTAP. On our FAS2620 running NetApp Release 9.4P4 I created an export policy for a share with NTFS security. The strange thing is that I can mount the share /vol/vol1, but not /vol/vol1/projects.
When I mount /vol/vol1 I can see the projects folder and have the correct access rights with my user, but I would rather mount /vol/vol1/projects directly. When I try it on my Ubuntu 18.04 Linux client, I get an error message:
mount.nfs: access denied by server while mounting svm1:/vol/vol01/projects
On the other hand, mounting /vol/vol1 succeeds without problems.
What am I missing?
Kind regards,
Andreas
You can check permissions with the 'file-directory' command:
vserver security file-directory show -vserver vs1 -path /vol/vol1/projects
Is the 'projects' folder created as a qtree or a regular folder?
If it is a qtree, check the security style on it:
qtree show -vserver vs1
Also, on cDOT you have to add your export policy to the root namespace, but that should already be done since you can mount one layer up.
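One way to sketch that check (vs1/vol1 are placeholder names): list which export policy each volume in the junction path carries, then inspect the rules on it:

```
::> volume show -vserver vs1 -fields policy,junction-path
::> vserver export-policy rule show -vserver vs1 -policyname default
```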
issue with SVM root volume recovery and make-vsroot
Hello. I am having an issue making Vserver root volume recovery work with SnapMirror in DP mode using make-vsroot on cDOT 9.3.
It's my understanding that once the SnapMirror relationship to the root volume is broken and you run make-vsroot against the DP volume, that volume becomes the Vserver's root and all the volumes of that Vserver remount to the new root.
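The procedure being described would look roughly like this (a sketch at advanced privilege; the names match the output further down):

```
::*> snapmirror break -destination-path vs1:vs1_root_dp
::*> volume make-vsroot -vserver vs1 -volume vs1_root_dp
```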
From the NetApp documentation:
Result
When the new volume is promoted as the Vserver root volume, the other data volumes get associated with the new Vserver root volume.
That's not what happens, though. When I make the DP volume the new root it does appear to become the new root, but all the junction paths still point to the original and show as orphaned. I have to unmount and remount them all.
So after running make-vsroot against my DP volume it does become the new root:
Cluster1::*> volume show -volume vs1_root_dp -instance
Vserver Name: vs1
Volume Name: vs1_root_dp
Aggregate Name: aggr1_node2
List of Aggregates for FlexGroup Constituents: aggr1_node2
Volume Size: 1GB
Name Ordinal: base
Volume Data Set ID: 1053
Volume Master Data Set ID: 2161074734
Volume State: online
Volume Style: flex
Extended Volume Style: flexvol
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: -
Group ID: -
Security Style: ntfs
UNIX Permissions: ------------
Junction Path: /
Junction Path Source: -
Junction Active: true
Junction Parent Volume: -
Vserver Root Volume: true
But the volumes that were mounted don't move over to it and are inaccessible.
Before the new root volume:
Cluster1::*> volume show -vserver vs1 -fields junction-path
vserver volume junction-path
------- ------ -------------
vs1 FS1 /FS1
vs1 FS1A /FS1A
vs1 FS1B /FS1B
vs1 FS1C /FS1C
vs1 vs1_root /
vs1 vs1_root_dp
After the new root volume:
Cluster1::*> volume show -vserver vs1 -fields junction-path
vserver volume junction-path
------- ------ --------------
vs1 FS1 (vs1_root)/FS1
vs1 FS1A (vs1_root)/FS1A
vs1 FS1B (vs1_root)/FS1B
vs1 FS1C (vs1_root)/FS1C
vs1 vs1_root -
vs1 vs1_root_dp /
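For what it's worth, the remounting mentioned above looks like this for each volume (a sketch using the names from the output):

```
::*> volume unmount -vserver vs1 -volume FS1
::*> volume mount -vserver vs1 -volume FS1 -junction-path /FS1
```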
Am I missing something here?
Thanks
Lp
Re: How to get Events older than 2 days?
No one has answered whether it is possible to locate historical logs on cDOT. Two days' worth is not far enough back: today is Dec 14th and I am searching for logs from Dec 8th. How long are logs kept? The 4000-entry limit is not adequate.
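For what it's worth, the EMS CLI does accept a time-range query; a sketch, assuming the standard ONTAP query syntax (the dates are from the post):

```
::> event log show -time "12/8/2018 00:00:00".."12/8/2018 23:59:59"
```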
Re: Compression
Hi
Volume size includes snapshots as well. If you still have snapshots containing the files in their original form, you need to delete those snapshots to recover the space.
You also didn't mention with what protocol and configuration your volume is presented to Windows.
If it's block-based, the file system(s) that manage the partition(s) also manage the empty blocks, and might keep them full of garbage or zeroes. Recent OSes and hypervisors (in case they have their own file system to host virtual disks) can tell ONTAP to mark a block as obsolete by sending a SCSI UNMAP command. If they do not, ONTAP assumes the block is in use by the client regardless of its content (including garbage and zeroes).
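On a Linux host, for example, the freed blocks can be handed back explicitly; a sketch, assuming an ext4/XFS filesystem on a LUN mounted at a hypothetical /mnt/lun:

```
# one-off discard of all unused blocks (issues SCSI UNMAP for a LUN)
fstrim -v /mnt/lun
# or mount with continuous discard instead
mount -o discard /dev/mapper/lun0 /mnt/lun
```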
Gidi
Re: issue with SVM root volume recovery and make-vsroot
Hi
According to this article, it's expected:
https://library.netapp.com/ecmdocs/ECMP1140387/html/GUID-3A6C32A1-EBE5-4F5C-A3EA-837BF6246A83.html
I've been told a few times before not to bother backing up Vserver root volumes, as recovery is just like creating a new one. (Load-sharing mirrors are still beneficial for other reasons.)
Gidi
Re: Snap Reserve on CIFS shares
Hi
If autogrow is enabled I agree that the reserve is not providing much benefit, but I do want to highlight:
1. If autodelete is enabled (which you said it sometimes is) and you are using SnapMirror/SnapVault, you also need to check what the volume's policy is: delete first or grow first? If it's delete first, you may lose baseline snapshots for SnapMirror/SnapVault (I don't think the reserve would have prevented that, but I don't know for sure).
2. I'm using the snap-reserve-full alerts from OCUM as an indication of a large number of file deletes or changes (to detect ransomware attacks).
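The grow-versus-delete order mentioned in point 1 is the volume's space-mgmt-try-first setting; a sketch with placeholder names:

```
::> volume show -vserver svm1 -volume vol1 -fields space-mgmt-try-first
::> volume modify -vserver svm1 -volume vol1 -space-mgmt-try-first volume_grow
```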
Gidi
NFS v4 mount gives access denied on junction path
Hi,
I am pretty new to Clustered Data ONTAP. On our FAS2620 running NetApp Release 9.4P4 I created an export policy for a share with NTFS security. The strange thing is that I can mount the share /vol/vol1, but not /vol/vol1/projects.
When I mount /vol/vol1 I can see the projects folder and have the correct access rights with my user, but I would rather mount /vol/vol1/projects directly. When I try it on my Ubuntu 18.04 Linux client, I get an error message:
mount -o sec=sys,vers=4.0 svm1:/vol/vol1/projects /mnt
mount.nfs: access denied by server while mounting svm1:/vol/vol1/projects
On the other hand, mounting /vol/vol1 succeeds without problems. When specifying NFS v3, I can mount both paths, i.e. /vol/vol1 and /vol/vol1/projects.
What am I missing?
Kind regards,
Andreas
Re: NFS (v3 and v4) mount gives access denied on junction path
Hi,
It is a qtree with NTFS security style. When I mount the share with NFS v3, or the path /vol/vol1 with NFS v4, permissions work as expected. The default export policy on the root has been opened up and I created an export policy for /vol/vol1/projects as well. I probably made a mistake there, but I have no idea what the problem could be or where to look.
I tried check-access:
svm::> check-access -vserver svm1 -volume vol1 -client-ip 10.1.1.100 -authentication-method sys -protocol nfs4 -access-type read-write
(vserver export-policy check-access)
Policy Policy Rule
Path Policy Owner Owner Type Index Access
----------------------------- ---------- --------- ---------- ------ ----------
/                             default    svm1_root volume          1 read
/vol                          default    svm1_root volume          1 read
/vol/vol1                     default    vol1      volume          1 read-write
3 entries were displayed.
The result is the same for nfs3.
Kind regards,
Andreas
Re: NFS (v3 and v4) mount gives access denied on junction path
As it's NTFS security style and not UNIX, you could try checking the -ntfs-unix-security-ops {ignore|fail} option.
You can read more about it here:
vserver export-policy rule show
But basically check with:
cluster::*> vserver export-policy rule show -vserver vs1 -fields ntfs-unix-security-ops
vserver policyname ruleindex ntfs-unix-security-ops
-------------- ---------- --------- ----------------------
vs1 default 1 fail
If it says fail, change it to ignore and test.
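The change itself would be along these lines (rule index 1 as in the example output; adjust the policy name and index to yours):

```
cluster::*> vserver export-policy rule modify -vserver vs1 -policyname default -ruleindex 1 -ntfs-unix-security-ops ignore
```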
Re: issue with SVM root volume recovery and make-vsroot
Thank you for the reply. Yes, that is the behavior I am seeing. However, according to this NetApp document on root volume recovery there is no mention of having to unmount and remount. It leads you to believe that you follow these instructions and you're recovered.
Not a very good way to recover if you have a lot of mounted volumes. I did test the LS mirror recovery and it works as expected. Why I was told DP was the preferred protection method, and why NetApp configured our system this way, is beyond me.
Thank you.
Restoring a Vserver's root volume
https://library.netapp.com/ecmdocs/ECMP1196906/html/GUID-3A6C32A1-EBE5-4F5C-A3EA-837BF6246A83.html
Re: NFS v4 mount gives access denied on junction path
I once ran into the same issue. You need to grant the NTFS right "Traverse folder / execute file" to the "Authenticated Users" group on your projects folder.
Re: cDOT NFS - .snapshot directory automounting
This is normal behaviour for newer Linux kernels, e.g. CentOS 7. When you access a snapshot directory it will show as a mount, because a different filesystem ID is presented for the snapshot.
The mounts are ended automatically when they are no longer accessed. Sometimes they go stale when processes keep them open for monitoring, e.g. SNMP monitoring of free space.
There are three possible solutions for this:
- change the process that keeps accessing the mount, so that its accesses are farther apart than the unmount timeout
- lower the timeout for the automatic unmount "/proc/sys/fs/nfs/nfs_mountpoint_timeout"
- change the ONTAP behaviour of presenting different FSIDs for snapshots:
vserver nfs modify -vserver $name -v3-fsid-change disabled
vserver nfs modify -vserver $name -v4-fsid-change disabled
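For the second option, the client-side timeout is in seconds and can be lowered like this (a sketch; requires root on the Linux client, and 60 is just an example value):

```
# read the current NFS submount expiry timeout (seconds)
cat /proc/sys/fs/nfs/nfs_mountpoint_timeout
# lower it, e.g. to 60 seconds
sysctl -w fs.nfs.nfs_mountpoint_timeout=60
```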
Re: Compression
Hi Gidi
It is a CIFS volume inside a vfiler. I followed these steps:
1. I mapped the share inside the volume onto a Windows server, as it is a CIFS NAS volume.
2. I had 2.5 TB of files. I added these files to multiple compressed (.zip) files, and the size of the zip files after compression was around 500 GB. I expected the size of the volume/vfiler to drop by 2 TB after the next SnapMirror transfer completed.
3. The SnapMirror is configured to run every 2 hours. Therefore, once the SnapMirror transfer completed, the size should have decreased by 2 TB.
4. The volume has deduplication enabled.
Please suggest: if the zip files total 500 GB after compression, why is the size of the volume still the same?
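The arithmetic behind that expectation, and why a snapshot can block it, can be sketched like this (sizes in GB are illustrative approximations of the figures above):

```shell
# Space accounting for the volume, ignoring metadata overhead (sizes in GB)
original=2560   # ~2.5 TB of uncompressed files
zipped=512      # ~500 GB after zipping
echo "expected reclaim: $((original - zipped)) GB"
# while a snapshot still references the deleted originals, the volume holds both:
echo "held with snapshot: $((original + zipped)) GB"
```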
Re: NFS (v3 and v4) mount gives access denied on junction path
I changed the option, but it made no difference. Just some more information:
rngx6786::> export-policy check-access -vserver svm1 -volume svm1_vol01 -client-ip 10.1.1.100 -authentication-method sys -protocol nfs4 -access-type read-write -qtree projects
Policy Policy Rule
Path Policy Owner Owner Type Index Access
----------------------------- ---------- --------- ---------- ------ ----------
/                             default    svm1_root  volume         1 read
/vol                          default    svm1_root  volume         1 read
/vol/vol1                     default    svm1_vol01 volume         1 read
/vol/vol1/projects            default    svm1_vol01 volume         1 read-write
4 entries were displayed.
rngx6786::> qtree show
Vserver Volume Qtree Style Oplocks Status
---------- ------------- ------------ ------------ --------- --------
svm1 svm1_root "" ntfs enable normal
svm1       svm1_vol01    ""           ntfs         enable    readonly
svm1       svm1_vol01    projects     ntfs         enable    readonly
svm1       svm1_vol01    topics       ntfs         enable    readonly
Re: NFS v4 mount gives access denied on junction path
The permissions are set like this, so it must be something different.
Thank you,
Andreas
snapmirror svm and vserver config override -command
Hi,
I'm using SnapMirror to migrate an SVM from a FAS8040 running 8.3.2P12 (populated with 10K disks) to an AFF A300 running 9.3P6, with the following command:
Destination_AFF::> snapmirror create -source-vserver VS1 -destination-vserver VS2 -type DP -throttle unlimited -policy DPDefault -schedule daily -identity-preserve true
It was important for us to use the identity-preserve parameter, but this fully provisioned (space-guarantee volume) some very large volumes, which meant that some of the SnapMirror transfers failed. I tried to set space-guarantee to none, but I was unable to change the setting as the volume was part of an identity-preserve SnapMirror relationship. I later found out that there is a diag-level command:
Destination_AFF::> vserver config override -command "vol modify -vserver VS2 -volume -space-guarantee none"
which allowed me to change this thin-provisioning setting (I wish I had known this before deleting the SnapMirror and volume configuration!).
My question is: apart from the space-guarantee setting, is it possible to use vserver config override -command to set volume efficiency to inline compression, inline dedupe, and data compaction if the source SnapMirror volumes don't have these settings? I just want to maximize the space-saving capability of the AFF A300. This is a bit of a request, I know, but your assistance would be greatly appreciated.
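For reference, the normal (non-override) forms of the inline efficiency settings look like this; a sketch with a placeholder volume name, and I can't say whether the override command accepts them:

```
Destination_AFF::> volume efficiency modify -vserver VS2 -volume <volname> -inline-compression true -inline-dedupe true
```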
Thanks