Channel: All ONTAP Discussions posts

Re: QoS minimum / floor implementation


We are using a 4-node cluster with 2 FAS and 2 AFF nodes running ONTAP 9.5. The problem remains:

- Configuring a floor/minimum AQoS rule on a volume on rotating spindles (FAS) works perfectly.

- A floor/minimum QoS rule cannot be configured on the same volume.

- Configuring a floor/minimum QoS rule on the same cluster on AFF works perfectly.

Question: Is there a technical reason behind this?
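
For reference, a minimal sketch of the two variants being compared (policy group, SVM and volume names are hypothetical; parameter availability depends on the ONTAP release and platform):

A fixed floor via a standard QoS policy group:

::> qos policy-group create -policy-group pg_floor -vserver svm1 -min-throughput 500iops
::> volume modify -vserver svm1 -volume vol_fas -qos-policy-group pg_floor

A floor via an adaptive QoS policy group:

::> qos adaptive-policy-group create -policy-group apg_floor -vserver svm1 -expected-iops 500iops/TB -peak-iops 1000iops/TB
::> volume modify -vserver svm1 -volume vol_fas -qos-adaptive-policy-group apg_floor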

 

 

 


Re: QoS minimum / floor implementation


I am sure engineering must have given enough thought and testing to state that QoS minimum only applies to AFF. I haven't configured or worked with Adaptive QoS, so I can't really comment on that. However, I think the whole game of QoS minimum or maximum consists of complex algorithms and logic. How quickly the system responds is probably one of the key factors in how QoS rate-bucket rebalancing works. With different workloads and varying rates of client traffic, it becomes difficult to compute and distribute the credits. Because AFF systems are all flash, they are quick to respond and to adjust their IOs, which is probably not reliably possible with spinning disks. Having said that, this is just my interpretation and I may be completely wrong on this. I am sure NetApp will come back to your query in due course. Thanks!

Re: How to identify Cold/Archive/Infrequent data from Netapp Volumes on FAS systems?


Thank you Sergey.

 

I wasn't aware that "IDR can be enabled on non-FabricPool aggregates using the ONTAP CLI. This includes HDD aggregates starting in ONTAP 9.6."
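
For reference, a rough sketch of what enabling IDR from the CLI might look like (aggregate and volume names here are hypothetical, and the exact field names may vary by ONTAP release):

::> storage aggregate modify -aggregate aggr_hdd1 -is-inactive-data-reporting-enabled true
::> volume show -volume vol1 -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent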

 

Hopefully it reports detailed info about file name, size, access & modified time. I'll try it out.

 

Does XCP provide a similar function?

 

Regards,

Ashwin

Re: How to identify Cold/Archive/Infrequent data from Netapp Volumes on FAS systems?


Hello,

 

"Hopefully it reports detail info about File name, Size, Access & Moodified tim. I'll try it out."

 

No, it is an internal ONTAP feature which only displays how much data is inactive on a NetApp volume.

Please bear in mind that FabricPool operates at the block level from the ONTAP point of view, but the effective result is that you can tier entire files as well as blocks within a larger file.

Please have a look at the FabricPool TRs:

 

https://www.netapp.com/us/media/tr-4598.pdf

https://www.netapp.com/us/media/tr-4695.pdf

 

Re: Not reclaiming space at the volume level.


The default setting is disabled. Is there any reason not to enable it?

Different Inode number across snapshots


Hi,

We are working with NAS Snapshots on a FAS2552 running ONTAP 8.3.2.

Once the volume snapshots are taken, the snapshots are NFS-mounted on a host from where the contents are copied to secondary storage. At this point we are observing that the inode numbers of files change across snapshots. For example, the inode number of a file F1 under snapshot 1 is different from the one under snapshot 2. Interestingly, this file is not touched at all between the snapshots.

Moreover, this behavior is observed only when the snapshot is mounted over NFSv3.

 

See the below outputs -

----- NFS Version 3  --------

10.xx.xx.147:/demo_nas_vol1 on /ver3_mnt type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,....addr=10.xx.xx.147)

[root@nfshost ~]# stat -c %i /ver3_mnt/.snapshot/snap1_for_inodeTest/fileone.txt
858188230
[root@nfshost ~]# stat -c %i /ver3_mnt/.snapshot/snap2_for_inodeTest/fileone.txt
858188486

 

----  NFS Version 4  -----
10.xx.xx.147:/demo_nas_vol1 on /ver4_mnt type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,....addr=10.xx.xx.147)

[root@nfshost ~]# stat -c %i /ver4_mnt/.snapshot/snap1_for_inodeTest/fileone.txt
13510
[root@nfshost ~]# stat -c %i /ver4_mnt/.snapshot/snap2_for_inodeTest/fileone.txt
13510

 

So my questions are:

1. Is this expected behavior?

2. What can be done to get consistent behavior?

Re: Different Inode number across snapshots


Hi,

 

See what they say about FSID in snapshots on page 71.

https://www.netapp.com/us/media/tr-4067.pdf

 

Just to reiterate in case you don't read the whole document - regarding disabling/enabling the FSID change option, please note:

Note: NetApp does not recommend changing this option unless directed by support. If this option is changed with clients mounted to the NFS server, data corruption can take place.

Re: Different Inode number across snapshots


Thanks Marcus,

This will still not resolve the inode issue I am observing when calling stat at the client end, because essentially, for NFSv3, the FSID of a file is returned, and this FSID is a combination of the inode number, the volume identifier and the snapshot ID.

However, for NFSv4, if v4-fsid-change is enabled, the inode number itself is returned instead.
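
For what it's worth, the current settings can be checked from the cluster shell roughly like this (the SVM name is hypothetical; the option names are the ones discussed in TR-4067, and changing them should only be done per the warning above):

::> set -privilege advanced
::*> vserver nfs show -vserver svm1 -fields v3-fsid-change,v4-fsid-change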

 


Re: Different Inode number across snapshots


Hi,

 

Your observation is correct!

 

So, to your questions:

1. Is this expected behavior? Yes.

2. What can be done to have consistent behavior? Use NFSv4.

 

This is a known bug:
https://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=933937

 

Bug is closed, will not be fixed:
At this time, this issue is not scheduled to be implemented or corrected in an upcoming release of the affected NetApp product.

 

Workaround:
To have identical FileIDs for a file across versions, use an NFSv4 mount instead of an NFSv3 mount.

 

Note: The NFS protocol does not require FileIDs for a file to be identical across versions of that file. Therefore, this is not a violation of the NFS protocol.

 

Thanks!

Re: OnTAP 9.6 and 7MTT


How does NetApp propose to handle customers that have data on 7-Mode devices, in getting that data across to their new NetApp devices / data fabric?

 

Broadcast Domains and Failover Groups


Hi,

 

I'm hoping someone can help me understand the benefit of the setup below. It was set up by a third party.

 

In this config, there are no overlapping IP addresses and there are 2 x SVMs which are members of the 'Default' IPspace:

  • The Default Broadcast Domain (BD) and Failover Group (FG) contain offline ports, so this is a redundant configuration
  • The 'Cluster-Management' BD and FG contain both the physical and VLAN ports
  • There is a dedicated BD and FG 'Data' for all of the physical ports in use
  • The 'SVM%-Data' BDs and FGs contain only the VLAN ports

 

Vserver  Group  Targets
----------------------------------------------------------------------------
CLUSTER  
 Cluster-Management 
  N01:e0M,
  N01:e0e-1111,
  N01:e0f-1111,
  N01:e0g-1111,
  N01:e0h-1111,
  N02:e0M,
  N02:e0e-1111,
  N02:e0f-1111,
  N02:e0g-1111,
  N02:e0h-1111
 Data 
  N01:e0e,
  N01:e0f,
  N01:e0g,
  N01:e0h,
  N02:e0e,
  N02:e0f,
  N02:e0g,
  N02:e0h
 Default 
  N01:e0c,
  N01:e0d,
  N02:e0c,
  N02:e0d
 SVM2-Data 
  N01:e0e-2222,
  N01:e0f-2222,
  N01:e0g-2222,
  N01:e0h-2222,
  N02:e0e-2222,
  N02:e0f-2222,
  N02:e0g-2222,
  N02:e0h-2222
 SVM1-Data 
  N01:e0e-3333,
  N01:e0f-3333,
  N01:e0g-3333,
  N01:e0h-3333,
  N02:e0e-3333,
  N02:e0f-3333,
  N02:e0g-3333,
  N02:e0h-3333
 SVM2-Management 
  N01:e0e-4444,
  N01:e0f-4444,
  N01:e0g-4444,
  N01:e0h-4444,
  N02:e0e-4444,
  N02:e0f-4444,
  N02:e0g-4444,
  N02:e0h-4444
 SVM1-Management 
  N01:e0e-5555,
  N01:e0f-5555,
  N01:e0g-5555,
  N01:e0h-5555,
  N02:e0e-5555,
  N02:e0f-5555,
  N02:e0g-5555,
  N02:e0h-5555

 

  1. Is there any reason to deviate from using the 'Default' Broadcast Domain and Failover Group?
  2. Should the associated VLAN and physical ports that the LIFs are allowed to fail over to be contained in the same BD/FG?

Thanks

 

Re: Broadcast Domains and Failover Groups


Hi.

 

  1. Is there any reason to deviate from using the 'Default' Broadcast Domain and Failover Group?
    1. No - the Default BD and FG are used by most customers.
  2. Should the associated VLAN and physical ports that the LIFs are allowed to fail over to be contained in the same BD/FG?
    1. No - there's no reason for the physical ports to be in any BD/FG (unless you are for some reason using the native VLAN). And only a single VLAN should be present in any BD/FG.

Having said all that - your configuration doesn't seem to be "wrong". You can have extra BD/FGs for the physical ports (mainly in case they carry a native VLAN), and you are allowed not to use the default BD/FG if you don't wish to.

 

Given that you are using failover groups, I assume this is not an iSCSI setup, right?

If so, it is a bit alarming that you don't seem to be using LACP at all - instead you trust ONTAP to detect that a port is down and move the LIFs over. I think LACP gives you better/faster fault tolerance and more throughput, plus workload spreading across the switches. Failover between the ports/switches will also be more seamless to some clients if it's done at OSI layer 2 (LACP) rather than layer 3 (LIF failover).
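
To illustrate the suggestion, an LACP interface group with a tagged VLAN would be built roughly along these lines (node, ifgrp name and VLAN ID are examples; the switch side needs a matching LACP port-channel):

::> network port ifgrp create -node N01 -ifgrp a0a -distr-func port -mode multimode_lacp
::> network port ifgrp add-port -node N01 -ifgrp a0a -port e0e
::> network port ifgrp add-port -node N01 -ifgrp a0a -port e0f
::> network port vlan create -node N01 -vlan-name a0a-2222

The LIFs and broadcast domains would then reference a0a-2222 instead of the individual e0x-2222 VLAN ports.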

Re: Broadcast Domains and Failover Groups


Thanks. It's not iSCSI.

 

The system was set up by a NetApp reseller.

 

I think a BD/FG per VLAN is required. Say, for example, you had a single BD/FG containing VLAN ports from two separate VLANs - they wouldn't be able to fail over to each other. As a consequence, you would need two BD/FGs, one for each VLAN.

 

Is that correct?

Re: Broadcast Domains and Failover Groups


Adding my take here.

Broadcast domains should always contain ports that are similar and can always fail over to each other. They should NOT contain any unused ports or ports on different VLANs (as already indicated). Why? In a failover scenario, a LIF, if so configured, can go to any port in the broadcast domain. I have had a customer accidentally leave everything in the same broadcast domain, and when the switch failed, the LIF moved to another port on a different VLAN (so the link was good, but the networking was not) and all communication stopped.

 

When you configure a LIF to have a "failover-group" (which is a leftover term from before clustered ONTAP 8.3) and you have the LIF failover policy set to "broadcast-domain-wide", then all ports in the broadcast domain *must* be in the same VLAN.

 

Depending on the platform, I might do something like have e0M and e0j in the same Broadcast-domain. e0M is active and e0j is hooked up and on the same VLAN (access-port on the switch) but just not in active use.

 

For the BD groups, when defining VLANs, I try to keep all the ports in the BD on the same tagged VLAN. In the example, there is a reference to e0M and e0x-1111 in the same BD. That is not ideal, from the standpoint that Active IQ and even Config Advisor will throw a caution/warning, since a LIF failover would switch from an access port to a tagged VLAN. The other examples appear to be fine (except I would merge all four ports into an LACP channel and then tag the VLANs on the ifgrp).

 

With all that said, if you have a 100% flat network and do not use any VLAN tags, then you can probably get by with the Default.

 

If you have multiple subnets and/or VLANs and you want failover to work without an issue, you MUST define a broadcast domain for each network (i.e. each subnet, like 192.168.1.0/24 and 192.168.2.0/24). Additionally, if you are doing things like VMware over NFS or iSCSI, it is usually best to use an MTU of 9000 (as long as the network infrastructure supports it!), and you can define the MTU at the broadcast domain.
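
As a rough sketch (IPspace, broadcast-domain name and ports here are examples only), a per-VLAN broadcast domain with jumbo frames would be defined like this, and ONTAP automatically creates a failover group of the same name:

::> network port broadcast-domain create -ipspace Default -broadcast-domain Data-VLAN2222 -mtu 9000 -ports N01:e0e-2222,N01:e0f-2222,N02:e0e-2222,N02:e0f-2222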


Re: Not reclaiming space at the volume level.


I can't think of any reason not to enable it if your use case supports it (which it sounds like it does) and the workload consuming space on the LUN can accommodate the downtime needed to offline the LUN and enable the setting. 
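
For reference, assuming the setting in question is the LUN's space-allocation option, the change would look roughly like this (SVM and LUN path are hypothetical, and the LUN must be offline while it is modified):

::> lun offline -vserver svm1 -path /vol/vol1/lun1
::> lun modify -vserver svm1 -path /vol/vol1/lun1 -space-allocation enabled
::> lun online -vserver svm1 -path /vol/vol1/lun1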

Offline Autosupport Files


I have logged a case with NetApp for a performance issue on one of my arrays. This is an offline system, completely air-gapped from the internet.

I have been able to generate the AutoSupport report easily and can browse to it via HTTP. However, this is hundreds of files, and I cannot find a way to get them off the system as a bundle.

Can anyone advise the best way to zip them up or access them from my Windows laptop (Windows monkey here, not great with Linux!!)?
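
One possible approach, sketched on the assumption that the files are reachable through the same HTTP(S) page you are already browsing (the URL below is just a placeholder for that page) and that you can run wget natively on Windows or via WSL:

wget --recursive --no-parent --no-host-directories --user admin --ask-password https://<address-you-browse-to>/<autosupport-directory>/

That mirrors the directory tree locally; the result can then be zipped with Windows' built-in "Send to > Compressed (zipped) folder" and uploaded to the case.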

 

Thanks in advance

Re: Downgrade or revert new AFF A300 from 9.6 to 9.5


FYI to those interested: we contacted support, and after some digging they found a stale smf table entry blocking the new nodes trying to join the cluster.

 

Although nodes 3 and 4 had the same version of ONTAP as the existing cluster, they would not join because there was a stale entry in the smf tables.

 

xxxxxxx::*> debug smdb table cluster_version_replicated show
uuid generation major minor version-string date ontapi-major ontapi-minor is-image-same state
------------------------------------ ---------- ----- ----- -------------------------------------------------- ------------------------ ------------ ------------ ------------- -----
3101b6df-7cec-11e5-8e37-00a0985f3fc6 8 3 0 NetApp Release 8.3P1: Tue Apr 07 16:05:35 PDT 2015 Tue Apr 07 12:05:35 2015 1 30 true none
57c64277-7cec-11e5-8e37-00a0985f3fc6 9 5 0 NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019 Fri Jun 14 11:33:34 2019 1 150 true none
833939e5-7cd5-11e5-b363-396932647d67 9 5 0 NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019 Fri Jun 14 11:33:34 2019 1 150 true none
f4270dd4-7cd3-11e5-a735-a570cc7c464a 9 5 0 NetApp Release 9.5P5: Fri Jun 14 15:33:34 UTC 2019 Fri Jun 14 11:33:34 2019 1 150 true none
4 entries were displayed.

 

There should be an entry for each node in the Cluster plus 1 entry for the Cluster.  

 

We removed the 8.3P1 entry:

::*> debug smdb table cluster_version_replicated delete -uuid 3101b6df-7cec-11e5-8e37-00a0985f3fc6

 

Note:  Do not edit the smf tables without guidance from NetApp Support. 

Re: 【Ontap select】 NodeCreateFailed, ClusterDeployFailed


I am encountering this error with a customer on ONTAP Select 2.12. Was a resolution found?
