All ONTAP Discussions posts

Re: Advanced disk partition for FAS8200


Hi

 

The split can also be into two data partitions (depending on the disk size and type); this allows a better split between the nodes.

You can see the exact configuration it will create per disk type and count in HWU: https://hwu.netapp.com

(select platform > FAS > model > OS, and when the results are displayed look for the line "ADP Root Partition Configuration").

 

There are partitioned spares and non-partitioned spares in the system; you can see the configuration in the HWU table. In general, the system will always try to keep that number of spare partitioned drives by partitioning a non-partitioned spare disk.

 

New disks added to the shelves and assigned to a node just show up as normal spares - they are only partitioned under the following conditions:

https://kb.netapp.com/app/answers/answer_view/a_id/1001793

It seems that if you add a disk to a RAID group with partitioned drives, it will try to partition that drive as well:

https://library.netapp.com/ecmdocs/ECMLP2427462/html/GUID-67EE20CC-446F-44B2-8164-E31CDC879839.html

 

I'm not aware of a wider limitation. It may force the RAID configuration to look a bit different than you planned (for example, if your system can only have 24 partitioned disks, you may be forced into something like 11 or 22 disks per RAID group, compared to an old standard of, say, 16).
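If it helps, you can check what the system currently sees from the CLI with something like this (a rough sketch; the node name is just a placeholder):

# spare disks and spare partitions, per original owner
::> storage aggregate show-spare-disks -original-owner node01

# root/data partition ownership per partitioned disk
::> storage disk show -partition-ownership

# whole (non-partitioned) spare disks
::> storage disk show -container-type spare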

 

Gidi

 


Re: Backplane Traversal


Hi

 

Unlike 7-Mode (and for FCP only as well), data for the indirect path does not go over the interconnect/backplane; it goes via the cluster network. By default on the AFF A200 that is ports e0a and e0b, which are 10GbE each (other ports of the same speed can also be used).

It should add very little latency (around 1 ms).

 

As NFS v3 is stateless, I would place the LIF in the same location as the volumes and let LIF failover handle redundancy. For NFS v4, or if pNFS is involved, you'd need to read some docs, but in general, in my experience the LIFs fail over faster than the aggregates... (assuming the ARP cache on the switches doesn't create any problems).
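A quick sketch of how to check and correct LIF placement (SVM, LIF, node, and port names below are placeholders):

# where each LIF currently lives vs. its home port
::> network interface show -vserver svm1 -fields home-node,home-port,curr-node,curr-port

# re-home a LIF next to the node that owns the volume's aggregate, then send it home
::> network interface modify -vserver svm1 -lif nfs_lif1 -home-node node01 -home-port e0c
::> network interface revert -vserver svm1 -lif nfs_lif1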

 

Gidi

Re: On-Tap Upgrade without internet access


IIS has worked for me to deploy ONTAP for years... Note that from ONTAP 9.4 or 9.5 (can't remember exactly) you don't need a web server anymore; you can just upload the image using System Manager.
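For the CLI route with an internal web server, it's roughly this (a sketch; the server URL, image file name, and version are placeholders):

# fetch the ONTAP image from an internal web server (no internet access needed)
::> cluster image package get -url http://intranet-server/ontap/97P1_q_image.tgz
::> cluster image package show-repository

# validate and run the automated update
::> cluster image validate -version 9.7P1
::> cluster image update -version 9.7P1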

Re: Backplane Traversal


Gidi wrote:

"It should add very little latency (around 1 ms)."

This is extremely HUGE latency for an all-flash array.

Re: Backplane Traversal


Know the saying - never assume it's a one-way street?

 

I just tried running a 4K, 100% random, 100% read IOmeter load while failing over the LIF to the other controller.

 

Either the 10GbE network is the limiting factor, or the indirect connection is not affected by the failover.

24x 960GB SSD

 

[Attached screenshots: fail.png and direct.png]

Re: On-Tap Upgrade without internet access


Correct for ONTAP. Unfortunately, for disk firmware, shelf firmware, and the Disk Qualification Package you still need the web server, or you can use the alternative method and use SCP (see the appropriate page for each of those downloads).
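For example, the web-server route looks roughly like this (a sketch; the node name and URL are placeholders, check the download page of each package for the exact steps):

# fetch a disk or shelf firmware package from an internal web server
::> storage firmware download -node node01 -package-url http://intranet-server/firmware/package.zip

# check what is currently installed
::> storage disk show -fields firmware-revision
::> storage shelf show -instance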

 

4-ports SAS Host Adapter, which two ports should be used to create a stack


I have the following adapter. If I create a stack of disk shelves (ds224-12), which two of the ports 4a/4b/4c/4d should be used to create the stack, and why?

MFG Part Number: NetApp, Inc. 110-00401 rev. B0
Part number: 111-02026+B0

 

Thanks!

Re: 4-ports SAS Host Adapter, which two ports should be used to create a stack


If you ONLY have a single 4-port adapter, then you use:

4a/4d Stack 1

4b/4c Stack 2

 

The card has two ASICs on it. One controls ports a/b and the other controls ports c/d.

This way, if one ASIC fails, the stack does not.

 

Now, referring to the cabling guide: if you have more than four ports, you should use ports from all sources.

With slots 0 and 4, for instance, you should do:

 

0a / 4b

4a / 0d 

0c / 4d 

4c / 0b
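Once cabled, you can sanity-check the stacks from the CLI, for example (a sketch; the node name is a placeholder):

# SAS initiator ports and their state
::> storage port show -node node01

# shelves and the paths they are visible on
::> storage shelf show

# nodeshell view of what is connected behind each SAS port
::> system node run -node node01 -command sasadmin shelf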

 

Deleting stale cluster peers, ONTAP 9.1 & 9.3


Hello, friends. I have two clusters that once had SnapMirror relationships, but we're breaking up the band. I deleted the SnapMirror relationships, vserver peers, and cluster peers on the destination side, but failed to do anything like that on the source side. Now there's a cluster peer relationship and several stale SnapMirror relationships visible on the source side that I can't delete.

 

  • Attempting to delete the cluster peer results in "Nope: A Snapmirror destination exists in this cluster."
  • "snapmirror list-destinations" on the source cluster returns no entries.
  • "snapmirror show" on the source cluster shows a few stale entries whose destination paths point to locations on the old destination cluster that no longer exist there.
  • No snapmirror commands on the source cluster work when trying to act on those stale destination paths.

Thankfully some network connectivity still exists, because I was able to recreate the cluster peer relationship. However, the destination vserver on the destination cluster has since been deleted, so "snapmirror release" commands on the source cluster still fail because ONTAP can't find the destination vserver.

 

Anyway, that's all quite a lot. Really the goal here is to delete (forcibly, if necessary) the cluster peer relationship. Any ideas to get rid of these stale dependencies?

 

Thanks!

Re: Deleting stale cluster peers, ONTAP 9.1 & 9.3


Have you tried the "snapmirror release -destination-path <path> -force true" command from the source cluster? That should forcibly remove the relationship.
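If that goes through, the rest of the cleanup would look roughly like this (a sketch; the paths, SVM, and peer names are placeholders from your own output):

# on the source cluster: what ONTAP still thinks it is a source for
source::> snapmirror list-destinations
source::> snapmirror show

# forcibly drop the stale destination info for each stale path
# (-relationship-info-only true avoids deleting snapshots on the source)
source::> snapmirror release -destination-path dst_svm:dst_vol -force true

# once the stale entries are gone, remove the peering
source::> vserver peer delete -vserver src_svm -peer-vserver dst_svm
source::> cluster peer delete -cluster dst_cluster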

Re: Deleting stale cluster peers, ONTAP 9.1 & 9.3


I have, yeah. Any "snapmirror release" command from the source cluster (at any privilege level: admin/advanced/diag) results in:

Error: command failed: Failed to get information for Vserver [deleted destination vserver]. (entry doesn't exist)

 

 

Re: Deleting stale cluster peers, ONTAP 9.1 & 9.3


Hmm, in that case you may need to temporarily rebuild the destination SVM and DP volume, re-initialize the relationship, and then perform the snapmirror release to get everything to clean up properly.
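Roughly, the sequence I have in mind (a sketch; all SVM, volume, and aggregate names are placeholders, and the SVM and volume need to be recreated with the old names so the stale entries match):

# on the destination cluster: recreate a throwaway SVM and DP volume
# (cluster and vserver peering must exist between the two clusters)
dest::> vserver create -vserver dst_svm -rootvolume dst_svm_root -aggregate aggr1 -rootvolume-security-style unix
dest::> volume create -vserver dst_svm -volume dst_vol -aggregate aggr1 -size 1g -type DP

# re-create and initialize the relationship so both sides know about it again
dest::> snapmirror create -source-path src_svm:src_vol -destination-path dst_svm:dst_vol -type XDP
dest::> snapmirror initialize -destination-path dst_svm:dst_vol

# then tear it down cleanly from both ends
dest::> snapmirror delete -destination-path dst_svm:dst_vol
source::> snapmirror release -destination-path dst_svm:dst_vol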

 

That's what I would try; someone else might chime in with a better idea though. :)

 

 

How to move a snapmirror source volume to another cluster in CDOT

Hello, I'm looking to move a SnapMirror source volume from Cluster A to Cluster C without re-baselining the original SnapMirror relationship between Cluster A and Cluster B. I've mirrored the volume between Cluster A and Cluster C, broken all the mirrors, and tried to resync the mirror between Cluster C and Cluster B (because they should have a common snapshot), but it's complaining: "Error: command failed: Source Cluster C:mirror_test must match with the source volume in the SnapMirror relationship". I've done this a hundred times in 7-Mode, but I cannot find an equivalent CDOT document: https://kb.netapp.com/app/answers/answer_view/a_id/1035002/~/how-to-physically-relocate-a-volume-snapmirror-destination-while-preserving-the
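For reference, what I expected to be able to do on Cluster B was roughly this (a sketch; SVM and volume names are placeholders, and it assumes Cluster B and Cluster C are already cluster- and vserver-peered):

# on Cluster B, repoint the relationship at the new source and resync off the common snapshot
clusterB::> snapmirror delete -destination-path svmB:mirror_test
clusterB::> snapmirror create -source-path svmC:mirror_test -destination-path svmB:mirror_test -type XDP
clusterB::> snapmirror resync -destination-path svmB:mirror_test

# afterwards, release the old destination info on Cluster A
clusterA::> snapmirror release -destination-path svmB:mirror_test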

Tags for ONTAP objects like shares, volumes, and qtrees to improve documentation and searches


If I have more than 500 volumes and many shares or exports, it would be more convenient if I could tag some objects with company-internal attributes.

 

For some objects I can use the "-comment" option, but it would be simpler if the ONTAP CLI or System Manager supported tagging in a separate field by default. That would make it easier for me to search my environment.
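For example, today I can do something like this with the comment field (a sketch; the names and the key=value convention are just an example):

# "tag" a volume via its comment field
::> volume modify -vserver svm1 -volume vol1 -comment "owner=finance;app=sap;tier=gold"

# later, find everything tagged for finance
::> volume show -comment *owner=finance* -fields volume,comment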


Re: Tags for ONTAP objects like shares, volumes, and qtrees to improve documentation and searches


OnCommand Unified Manager supports Annotations, which might be helpful. You can define custom Annotation Rules to dynamically categorize storage objects. Once objects are associated, you can view in the Admin>Annotations section of OCUM all objects that match a specific annotation, whether associated manually or dynamically (you cannot, however, use custom Annotations as criteria for filtering searches for OCUM objects, FYI). 

 

https://docs.netapp.com/ocum-95/index.jsp?topic=%2Fcom.netapp.doc.onc-um-ag%2FGUID-3991EE2A-B938-49E5-A736-BC1BAD1664E6.html

To recreate one and only one aggr on same disks?


For various reasons, we need to recreate the one and only data aggregate on several nodes. The aggregates have either root-data or root-data1-data2 configurations. We could use a shelf of disks as a staging aggregate: move the data from the data aggregate to the staging aggregate, destroy and recreate the data aggregate, and finally move the data back.

My question: since the root aggregate uses root partitions on multiple disks, how can I restore all the partition configuration for that data aggregate when I recreate it, before moving the data back? Or do I have to manually create the partitions and set ownership?

 

Or, what is your recommendation?
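For reference, the workflow I have in mind is roughly this (a sketch; aggregate, volume, SVM, and node names are placeholders, assuming the freed data partitions simply become spare partitions after the aggregate is destroyed):

# move each volume to the staging aggregate
::> volume move start -vserver svm1 -volume vol1 -destination-aggregate staging_aggr

# destroy and recreate the data aggregate on the same (now spare) partitions
::> storage aggregate delete -aggregate data_aggr
::> storage aggregate create -aggregate data_aggr -node node01 -diskcount 22

# move the volumes back
::> volume move start -vserver svm1 -volume vol1 -destination-aggregate data_aggr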

Re: To recreate one and only one aggr on same disks?


You'll have to re-initialize the array to change ADP. But are you just wanting to recreate it in order to create a FabricPool aggregate, for example?


Re: To recreate one and only one aggr on same disks?


I am trying to avoid re-initializing the root aggregate. By keeping the root aggregate and only recreating the data aggregate, it would be less work.

Please explain your idea in detail. Thanks!
