Is there a correlated RedHat bugID that we could reference?
Re: NFS: v4 server .. returned a bad sequence-id error!
Re: NFS v4 mount gives access denied on junction path
You are using NFSv4.
Have you set the NFSv4 ID domain on both the client and the vserver?
https://kb.netapp.com/app/answers/answer_view/a_id/1030467/~/how-to-configure-nfsv4-in-cluster-mode-
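If it is not set, a rough sketch of what to check and change (vserver name and domain are placeholders):
cluster1::> vserver nfs show -vserver svm1 -fields v4-id-domain     (check the current ID domain)
cluster1::> vserver nfs modify -vserver svm1 -v4-id-domain example.com
On the Linux client the same domain has to be in /etc/idmapd.conf:
[General]
Domain = example.com
followed by restarting the idmapd service (or clearing the idmap cache) before re-mounting.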
Re: 9.4 Cluster LIF rename
Re: 9.4 Cluster LIF rename
I have tested this on the simulator, and I think it is a duplicate LIF name issue.
If you already have LIFs named cluster1-01_clus1 and cluster1-02_clus1 and you try to rename cluster1-01_clus1 to cluster1-02_clus1, the system will not allow it because cluster1-02_clus1 already exists, so the LIF name is automatically changed to cluster1-01_cluster1-02_clus1.
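For reference, this is roughly the rename command involved (cluster LIFs live in the "Cluster" vserver):
cluster1::> network interface rename -vserver Cluster -lif cluster1-01_clus1 -newname cluster1-02_clus1
and because cluster1-02_clus1 already exists, the LIF ends up named cluster1-01_cluster1-02_clus1 instead.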
Re: 9.4 Cluster LIF rename
Re: 9.4 Cluster LIF rename
It does sound goofy. I'd open a case and see what support says.
Re: Content-Security-Policy HTTP header Not Implemented
In order to resolve the CSP Nessus result on port 443, open a support case and ask for assistance with the workaround for bug 1200750.
snapshots deleted
I'm trying to get familiar with Data ONTAP using System Manager and 9.4. I noticed that I had hourly snapshots at 7, 8, 9, and 10 am. I did a restore from the 8am snapshot and then realized that it wiped out my 9am and 10am snapshots. I'm guessing this is normal behavior. But what if I later found out that I really should have used the 9am snapshot? Is there something I could have done to save those other snapshots, just in case I later found out that I restored the wrong one? Thanks.
Re: snapshots deleted
You can use FlexClone to create a FlexClone volume from a specific snapshot and verify the data before you restore the volume.
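Something along these lines (vserver, volume, and snapshot names are placeholders):
cluster1::> volume clone create -vserver svm1 -flexclone vol1_clone -parent-volume vol1 -parent-snapshot hourly.2018-12-20_0800
cluster1::> volume mount -vserver svm1 -volume vol1_clone -junction-path /vol1_clone
Then browse the clone, check the data, and only run the restore once you are sure it is the right snapshot.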
Re: 16Tb lun limit windows Ontap 9.3
wrote: It is not increased in 9.5, no.
Yes, using multiple extents would be one option, or using NFS instead of iSCSI.
Yes, but No.
One of the killer issues with NetApp: Veeam backup & restore with SnapMirror/SnapVault and Exchange.
If you want to back up and restore Exchange mailboxes using SnapVault/SnapMirror, you need a single VMFS datastore.
Exchange is NOT supported on NFS datastores.
If your mailbox LUNs/VMDKs are bigger than 16TB, you need to add an extent to your existing datastore.
Veeam is not able to restore when using VMFS extents.
So the 16TB LUN limitation for Exchange when using Veeam is a BIG issue, and we have to migrate LUNs to another storage vendor.
How to lose a customer over such a ridiculous limitation...
Re: 16Tb lun limit windows Ontap 9.3
And VMFS extents aren't good practice anyway; they add complexity to SAN management because of the asymmetry between the storage-side and vSphere-side views.
Especially knowing that Microsoft file systems support files up to 16 exbibytes...
Snapmirror - Error: 13102 - No release-able destination found
Hi,
I've been trying to repair a couple of my SnapMirrors that have failed to link up. I recreated a SQL backup and deleted the old broken SnapMirror SQL link. It deleted fine, but the next day it showed back up, red, with "idle with restart checkpoint" and "destination must be restricted...". When I try to delete it in the OnCommand GUI I get the error "Data ONTAP API Failed: Snapmirror error: No release-able destination found that matches those parameters. (Error: 13102)". When I SSH to the SAN I get a similar error about possibly not having permission to the resource. The volume it is trying to reach was deleted as part of yesterday's actions. I read an old article, "Snapmirror Cleanup", which says to release it, yet I get the same error: "source is offline, is restricted, or does not exist".
How do I get rid of that SnapMirror link?
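For reference, the release attempt looked roughly like this (7-Mode syntax from the article, with placeholder volume and filer names, run on the source filer):
source-filer> snapmirror release sql_vol dest-filer:sql_vol_mirror
and that is where I get the "source is offline, is restricted, or does not exist" message.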
Chad
Re: snapshots deleted
ok thanks. I will check out flex clone.
Qtree Ops and CIFS Ops
Hi Experts,
I am trying to understand the relationship between the Ops counts displayed by the qtree stats command and the total CIFS ops reported by sysstat -x.
Suppose I have a filer with only one qtree and one CIFS share.
I run qtree stats -z.
I then see 10 CIFS operations per second for, say, 1 minute (10 x 60 = 600 ops total) in the sysstat -x output.
After that there are no more CIFS ops.
So if I now check my qtree stats, will the CIFS Ops counter show the same 600 ops?
Is this how it works? Please help me understand.
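To put the sequence in command form, this is roughly what I have in mind (7-Mode commands):
filer> qtree stats -z          (zero the per-qtree counters)
filer> sysstat -x 1            (shows roughly 10 CIFS ops/sec for about a minute)
filer> qtree stats             (expecting the CIFS ops column to now show roughly 600)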
Thanking you,
Re: hw_assist error
1. A newbie question: when I SSH into nas-01 using the IP address, it still shows "nas::>". Does this mean I am still at the cluster level?
A. Yes, you are at the cluster level. You can confirm this by running nas::> node show, which lists the nodes in that cluster.
2. system node run -node nas-01 brings me to "nas-01>" (notice that at this level the commands are not the same as at the cluster level).
A. If you run nas::> system node run -node nas-01, it takes you from the cluster shell to the 7-Mode-style node shell, where only a limited set of 7-Mode commands work.
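Putting the two answers together, the sequence looks roughly like this (node names taken from the question):
nas::> node show                        (clustershell; lists the nodes, e.g. nas-01, nas-02)
nas::> system node run -node nas-01     (drops into the 7-Mode-style nodeshell of nas-01)
nas-01> sysconfig -a                    (example of a nodeshell command)
nas-01> exit                            (returns to the clustershell)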
VSCAN server disconnect errors
Hi Experts,
We are having CIFS outages in our filer environment (8.2.x 7-Mode). Everything works normally if vscan is turned off.
I am seeing the errors below on the filer console (AV was configured on this filer earlier, but vscan status is currently off). I would like to know what could be causing these vscan server disconnects.
PROD_FILER> Thu Dec 20 00:57:51 CST [PROD_FILER:cifs.server.errorMsg:error]: CIFS: Error for server \\AV-SERVER-7009: Error in session setup response STATUS_MORE_PROCESSING_REQUIRED.
Thu Dec 20 00:57:51 CST [PROD_FILER:cifs.server.infoMsg:info]: CIFS: Warning for server \\AV-SERVER-7009: Connection terminated.
Thu Dec 20 00:57:51 CST [PROD_FILER:vscan.server.connectError:error]: CIFS: An attempt to connect to vscan server \\AV-SERVER-7009 failed [0xc0000016].
Thu Dec 20 00:57:51 CST [PROD_FILER:vscan.dropped.connection:warning]: CIFS: Virus scan server \\AV-SERVER-7009 (10.0.20.37) has disconnected from the filer.
Thu Dec 20 00:58:07 CST [PROD_FILER:vscan.server.connecting.successful:info]: CIFS: Vscan server \\AV-SERVER-7009 registered with the filer successfully.
Thu Dec 20 01:00:00 CST [PROD_FILER:kern.uptime.filer:info]: 1:00am up 203 days, 8:44 60 NFS ops, 30141636660 CIFS ops, 0 HTTP ops, 0 FCP ops, 0 iSCSI ops
Thu Dec 20 01:00:01 CST [PROD_FILER:snmp.traphost.resolve.failed:error]: snmp: cannot send traps to 'cbjsvr3033.company.com' because it could not be resolved via DNS. Retries occur hourly.
PROD_FILER> Thu Dec 20 01:03:21 CST [PROD_FILER:cifs.server.errorMsg:error]: CIFS: Error for server \\AV-SERVER-7009: Error in session setup response STATUS_MORE_PROCESSING_REQUIRED.
Thu Dec 20 01:03:21 CST [PROD_FILER:cifs.server.infoMsg:info]: CIFS: Warning for server \\AV-SERVER-7009: Connection terminated.
Thu Dec 20 01:03:21 CST [PROD_FILER:cifs.server.errorMsg:error]: CIFS: Error for server \\AV-SERVER-7009: CIFS Session Setup Error STATUS_MORE_PROCESSING_REQUIRED.
Thu Dec 20 01:03:21 CST [PROD_FILER:vscan.server.connectError:error]: CIFS: An attempt to connect to vscan server \\AV-SERVER-7009 failed [0xc0000016].
Thu Dec 20 01:03:21 CST [PROD_FILER:vscan.dropped.connection:warning]: CIFS: Virus scan server \\AV-SERVER-7009 (10.0.20.37) has disconnected from the filer.
Thu Dec 20 01:03:35 CST [PROD_FILER:vscan.server.connecting.successful:info]: CIFS: Vscan server \\AV-SERVER-7009 registered with the filer successfully.
Thu Dec 20 01:08:51 CST [PROD_FILER:cifs.server.errorMsg:error]: CIFS: Error for server \\AV-SERVER-7009: Error in session setup response STATUS_MORE_PROCESSING_REQUIRED.
Thu Dec 20 01:08:51 CST [PROD_FILER:cifs.server.infoMsg:info]: CIFS: Warning for server \\AV-SERVER-7009: Connection terminated.
Thu Dec 20 01:08:51 CST [PROD_FILER:vscan.server.connectError:error]: CIFS: An attempt to connect to vscan server \\AV-SERVER-7009 failed [0xc0000016].
Thu Dec 20 01:08:51 CST [PROD_FILER:vscan.dropped.connection:warning]: CIFS: Virus scan server \\AV-SERVER-7009 (10.0.20.37) has disconnected from the filer.
Thu Dec 20 01:09:03 CST [PROD_FILER:vscan.server.fqdn.unavail:error]: CIFS: Could not determine Fully Qualified Domain Name (FQDN) of virus scanning (vscan) server (10.0.20.37). If the vscan server is running on the Microsoft Longhorn OS, the storage system requires the vscan server FQDN for authenticating itself to the vscan server.
Thu Dec 20 01:09:03 CST [PROD_FILER:vscan.server.connecting.successful:info]: CIFS: Vscan server \\AV-SERVER-7009 registered with the filer successfully.
PROD_FILER>
PROD_FILER>
PROD_FILER> Thu Dec 20 01:14:21 CST [PROD_FILER:cifs.server.errorMsg:error]: CIFS: Error for server \\AV-SERVER-7009: Error in session setup response STATUS_MORE_PROCESSING_REQUIRED.
Thu Dec 20 01:14:21 CST [PROD_FILER:cifs.server.infoMsg:info]: CIFS: Warning for server \\AV-SERVER-7009: Connection terminated.
Thu Dec 20 01:14:21 CST [PROD_FILER:vscan.server.connectError:error]: CIFS: An attempt to connect to vscan server \\AV-SERVER-7009 failed [0xc0000016].
Thu Dec 20 01:14:21 CST [PROD_FILER:vscan.dropped.connection:warning]: CIFS: Virus scan server \\AV-SERVER-7009 (10.0.20.37) has disconnected from the filer.
Thu Dec 20 01:14:32 CST [PROD_FILER:vscan.server.connecting.successful:info]: CIFS: Vscan server \\AV-SERVER-7009 registered with the filer successfully.
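For what it is worth, these are the 7-Mode commands I use to check the scanner state while this is happening (commands only, no output pasted):
PROD_FILER> vscan              (shows whether virus scanning is on or off and the registered scanners)
PROD_FILER> vscan scanners     (lists the vscan servers currently connected)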
Re: snapshots deleted
So I tried FlexClone using System Manager. This time I wanted to restore the 1205 snapshot, knowing it would wipe out the 1305 snapshot, so I first made a FlexClone of the 1305 snapshot to keep for later use if needed. Then when I tried my restore from the 1205 snapshot it refused, saying: Data ONTAP API Failed: Failed to promote Snapshot copy "hourly.2018-12-20_1205" because one or more newer Snapshot copies are currently used as a reference Snapshot copy for data protection operations: snapmirror.d8c3698c-03e4-11e9-b423-000c2942bb2d_2151905657.2018-12-20_164500. (Error: 13001).
So I thought maybe I should split the FlexClone, but after the split I got the same error when trying to restore the 1205 snapshot. Any ideas what I am doing wrong, or is there a better way to avoid losing the 1305 snapshot when I restore the 1205 snapshot? Thanks
Re: snapshots deleted
In this case, you can go one of two ways:
1. Create two FlexClone volumes, one from snapshot 1205 and one from 1305, verify the data, and then decide which snapshot to use for the restore.
or
2. Create a FlexClone volume from snapshot 1305 and split it, so you keep a copy for later if needed; after splitting the 1305 FlexClone volume, you can restore the 1205 snapshot.
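In command-line terms, option 2 would look roughly like this (vserver and volume names are placeholders; the snapshot names are the ones from your post):
cluster1::> volume clone create -vserver svm1 -flexclone vol1_1305_clone -parent-volume vol1 -parent-snapshot hourly.2018-12-20_1305
cluster1::> volume clone split start -vserver svm1 -flexclone vol1_1305_clone
cluster1::> volume snapshot restore -vserver svm1 -volume vol1 -snapshot hourly.2018-12-20_1205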
OID for quota reporting
Hi All,
I would like to set up a custom SNMP trap that fires as soon as my quota reaches 70% (soft quota reporting would also work).
Which of these traps/OIDs do I need to use for this purpose?
qrFileLimit snmp.1.3.6.1.4.1.789.1.4.3.1.7.1
qrFilesUsed snmp.1.3.6.1.4.1.789.1.4.3.1.6.1
qrId snmp.1.3.6.1.4.1.789.1.4.3.1.3.1
qrIndex snmp.1.3.6.1.4.1.789.1.4.3.1.1.1
qrKBytesLimit snmp.1.3.6.1.4.1.789.1.4.3.1.5.1
qrKBytesUsed snmp.1.3.6.1.4.1.789.1.4.3.1.4.1
qrPathName snmp.1.3.6.1.4.1.789.1.4.3.1.8.1
qrType snmp.1.3.6.1.4.1.789.1.4.3.1.2.1
qrV264KBytesLimit snmp.1.3.6.1.4.1.789.1.4.6.1.26.1
qrV264KBytesSoftLimit snmp.1.3.6.1.4.1.789.1.4.6.1.28.1
qrV264KBytesThreshold snmp.1.3.6.1.4.1.789.1.4.6.1.27.1
qrV264KBytesUsed snmp.1.3.6.1.4.1.789.1.4.6.1.25.1
qrV2FileLimit snmp.1.3.6.1.4.1.789.1.4.6.1.11.1
qrV2FileQuotaUnlimited snmp.1.3.6.1.4.1.789.1.4.6.1.10.1
qrV2FilesUsed snmp.1.3.6.1.4.1.789.1.4.6.1.9.1
qrV2HighKBytesLimit snmp.1.3.6.1.4.1.789.1.4.6.1.7.1
qrV2HighKBytesSoftLimit snmp.1.3.6.1.4.1.789.1.4.6.1.21.1
qrV2HighKBytesThreshold snmp.1.3.6.1.4.1.789.1.4.6.1.18.1
qrV2HighKBytesUsed snmp.1.3.6.1.4.1.789.1.4.6.1.4.1
qrV2Id snmp.1.3.6.1.4.1.789.1.4.6.1.3.1
qrV2IdType snmp.1.3.6.1.4.1.789.1.4.6.1.15.1
qrV2Index snmp.1.3.6.1.4.1.789.1.4.6.1.1.1
qrV2LowKBytesLimit snmp.1.3.6.1.4.1.789.1.4.6.1.8.1
qrV2LowKBytesSoftLimit snmp.1.3.6.1.4.1.789.1.4.6.1.22.1
qrV2LowKBytesThreshold snmp.1.3.6.1.4.1.789.1.4.6.1.19.1
qrV2LowKBytesUsed snmp.1.3.6.1.4.1.789.1.4.6.1.5.1
qrV2PathName snmp.1.3.6.1.4.1.789.1.4.6.1.12.1
qrV2QuotaUnlimited snmp.1.3.6.1.4.1.789.1.4.6.1.6.1
qrV2Sid snmp.1.3.6.1.4.1.789.1.4.6.1.16.1
qrV2SoftFileLimit snmp.1.3.6.1.4.1.789.1.4.6.1.24.1
qrV2SoftFileQuotaUnlimited snmp.1.3.6.1.4.1.789.1.4.6.1.23.1
qrV2SoftQuotaUnlimited snmp.1.3.6.1.4.1.789.1.4.6.1.20.1
qrV2ThresholdUnlimited snmp.1.3.6.1.4.1.789.1.4.6.1.17.1
qrV2Tree snmp.1.3.6.1.4.1.789.1.4.6.1.14.1
qrV2Type snmp.1.3.6.1.4.1.789.1.4.6.1.2.1
qrV2Volume snmp.1.3.6.1.4.1.789.1.4.6.1.13.1
qrV2VolumeName snmp.1.3.6.1.4.1.789.1.4.6.1.29.1
qrV2Vserver snmp.1.3.6.1.4.1.789.1.4.6.1.30.1
qrVFileLimit snmp.1.3.6.1.4.1.789.1.4.5.1.7.1
qrVFileLimitSoft snmp.1.3.6.1.4.1.789.1.4.5.1.15.1
qrVFilesUsed snmp.1.3.6.1.4.1.789.1.4.5.1.6.1
qrVId snmp.1.3.6.1.4.1.789.1.4.5.1.3.1
qrVIdType snmp.1.3.6.1.4.1.789.1.4.5.1.11.1
qrVIndex snmp.1.3.6.1.4.1.789.1.4.5.1.1.1
qrVKBytesLimit snmp.1.3.6.1.4.1.789.1.4.5.1.5.1
qrVKBytesLimitSoft snmp.1.3.6.1.4.1.789.1.4.5.1.14.1
qrVKBytesThreshold snmp.1.3.6.1.4.1.789.1.4.5.1.13.1
qrVKBytesUsed snmp.1.3.6.1.4.1.789.1.4.5.1.4.1
qrVPathName snmp.1.3.6.1.4.1.789.1.4.5.1.8.1
qrVSid snmp.1.3.6.1.4.1.789.1.4.5.1.12.1
qrVTree snmp.1.3.6.1.4.1.789.1.4.5.1.10.1
qrVType snmp.1.3.6.1.4.1.789.1.4.5.1.2.1
qrVVolume snmp.1.3.6.1.4.1.789.1.4.5.1.9.1
qtreeExportPolicy snmp.1.3.6.1.4.1.789.1.5.10.1.10.1
qtreeId snmp.1.3.6.1.4.1.789.1.5.10.1.4.1
qtreeIndex snmp.1.3.6.1.4.1.789.1.5.10.1.1.1
qtreeIsExportPolicyInherited snmp.1.3.6.1.4.1.789.1.5.10.1.11.1
qtreeName snmp.1.3.6.1.4.1.789.1.5.10.1.5.1
qtreeOplock snmp.1.3.6.1.4.1.789.1.5.10.1.8.1
qtreeStatus snmp.1.3.6.1.4.1.789.1.5.10.1.7.1
qtreeStyle snmp.1.3.6.1.4.1.789.1.5.10.1.6.1
qtreeVolume snmp.1.3.6.1.4.1.789.1.5.10.1.2.1
qtreeVolumeName snmp.1.3.6.1.4.1.789.1.5.10.1.3.1
qtreeVserver snmp.1.3.6.1.4.1.789.1.5.10.1.9.1
quotaInitPercent snmp.1.3.6.1.4.1.789.1.4.2.0
I get an error when I try most of the above OIDs, for example:
Filer*> snmp traps test.var snmp.1.3.6.1.4.1.789.1.4.3.1.7.1
Error--var spec couldn't be retrieved from database: snmp.1.3.6.1.4.1.789.1.4.3.1.7.1
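For context, the kind of custom trap definition I am trying to build looks roughly like this (7-Mode custom-trap syntax as I understand it; the trap name and values are placeholders, and whichever qr* OID turns out to be the right one would go in the .var line):
Filer*> snmp traps quota70.var snmp.1.3.6.1.4.1.789.1.4.6.1.25.1
Filer*> snmp traps quota70.trigger level-trigger
Filer*> snmp traps quota70.edge-1 73400320      (placeholder: the raw KB value equal to 70% of a hypothetical 100 GB quota, since edge-1 is compared against the variable's value, not a percentage)
Filer*> snmp traps quota70.interval 3600
Filer*> snmp traps enable quota70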
Changing Cluster Name and Physical Node Name for cDOT 9.1 Non disruptively
Hi,
I am sure changing the cluster name and node names should be non-disruptive, but can someone who has done it, or NetApp, confirm the same?
As far as SnapMirror is concerned, it runs via the SVMs, which we are not renaming at all.
We just want to change the cluster name and the node names.
I am guessing I will have to re-add the cluster to OCUM, as it was added via hostname.
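For the record, the commands I am planning to use are roughly these (new names are placeholders):
cluster1::> cluster identity modify -name newcluster1
cluster1::> system node rename -node cluster1-01 -newname newcluster1-01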
Thanks,
Ash