
Best Way to Delete "MDV" volumes for Node Decommissioning


We are in the process of (finally!) decommissioning our FAS6220 HA pairs. Auditing is enabled, so I have to address the audit staging (MDV) volumes. I've seen a number of suggestions across Google search results, but wanted to present this to the community for a solution rooted in NetApp best practice.

 

I will be affecting nine (9) aggregates across four (4) FAS6220 nodes in this procedure.  I am currently vacating the aggregates of data and root volumes but suspect I will not be able to delete the aggregates as long as there are staging volumes present.

 

As always, thank you all for any constructive feedback you may be able to provide!

 

 


Re: Best Way to Delete "MDV" volumes for Node Decommissioning


Based on TR-4189, page 7 in particular, I would say that the proper way to clean up those MDV volumes would be to delete the audit policy for all SVMs, which then deletes the audit staging volumes. 
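
In CLI terms, that would be something along these lines for each SVM that has auditing enabled (the SVM name here is just a placeholder):

::> vserver audit disable -vserver vs1
::> vserver audit delete -vserver vs1

Once the audit configuration is gone from every SVM, the MDV_aud staging volumes should be cleaned up and the aggregates can then be emptied out.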

 

Steps 1 and 2 of this KB article also describe the process. 

 

Hope that helps!

 

 

Donny

Re: Best Way to Delete "MDV" volumes for Node Decommissioning


I would never "shoot the messenger"... but that was the ONE solution I was hoping could be avoided. This means auditing would NOT be active for the period of time it takes to delete the policy, delete the aggregates, and re-create the policy. Is this assumption correct? Thank you for your quick reply, Donny!

 

Greg

Re: Best Way to Delete "MDV" volumes for Node Decommissioning


Yes, I would agree with that assumption. 
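
To keep that gap as short as possible, you can re-create and enable the audit configuration as soon as the old aggregates are gone, roughly like this (SVM name, destination path, and rotation settings are placeholders):

::> vserver audit create -vserver vs1 -destination /audit_log -rotate-size 100MB -rotate-limit 10
::> vserver audit enable -vserver vs1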

Re: Best Way to Delete "MDV" volumes for Node Decommissioning


Very well... I will need to plan accordingly.  Thank you!

Re: Best Way to Delete "MDV" volumes for Node Decommissioning


I could never find any official documentation on this. I really wish NetApp would publish something on this topic, a KB article or something!

 

What I have noticed/done in the past is simply delete the aggregate. As you know, MDV volumes are placed on every aggregate when auditing is enabled; even an aggregate created after the fact gets an MDV volume added. My belief is that once a volume is migrated off an aggregate, the MDV on the prior aggregate is no longer used and only the MDV on the current aggregate is updated. I also suspect that auditing information is not necessarily moved during the volume move, so deleting the aggregate may in fact remove some historical auditing information.

 

With that said, I cannot recall exactly, but it did require diag or at least advanced privilege mode. I was able to forcibly delete the MDV volumes on the aggregate I was removing. After that, the aggregate was removed easily.

 

Of course, I may have had to use the force method on the aggregate as I could not remove the MDV. Again, I do not recall completely.

 

Either way, I was able to use the "-force" flag to either delete the MDV volumes and then offline/delete the aggregate, or offline/delete the aggregate directly with "-force". I suspect it was the first.
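
From memory, the sequence was roughly along these lines, with the owning vserver and volume names taken from whatever "volume show -volume MDV_aud*" reports (exact flags may vary by release):

::> set -privilege diag
::*> volume offline -vserver <owning_vserver> -volume MDV_aud_xxxxxxxx
::*> volume delete -vserver <owning_vserver> -volume MDV_aud_xxxxxxxx -force true
::*> storage aggregate offline -aggregate <old_aggr>
::*> storage aggregate delete -aggregate <old_aggr>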

 

Again, IF SUPPORT IS WATCHING: you should really come up with a KB article documenting how to properly remove an MDV volume from an aggregate that is being decommissioned, and that process should not come anywhere close to including "turn off auditing"!

 

Re: Best Way to Delete "MDV" volumes for Node Decommissioning


I'm from Support. Is this for CIFS auditing or just regular ONTAP auditing? Sorry, just trying to get clarification. Great suggestion, though.

Re: Best Way to Delete "MDV" volumes for Node Decommissioning


This is usually CIFS/NFS auditing. The MDV volumes are used to store event information related to auditing.
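
If you want to see them, something like this should list the staging volumes and which aggregate each one lives on:

::> volume show -volume MDV_aud* -fields vserver,aggregate,state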



FAS2720 & AFF220 4 Node cluster

Hi,
We are having an issue with the following setup:
- we have a 4-node cluster with different node management IPs, say X.X.X.1 to X.X.X.4
- we have 1 cluster management LIF, x.x.x.5

The issue is that when we unplug the management cable from X.X.X.1, the other nodes do not automatically take over the cluster management LIF (the cluster management IP becomes inaccessible) and we have to manually move it to another node. Please guide us on what we are missing in this situation.
Please find the attached screenshot.
Thanks.
 

Re: FAS2720 & AFF220 4 Node cluster


Hi,

 

What is the output of this cmd:

::> network interface show -failover -lif cluster_mgmt


Also, could you elaborate on "we have to manually shift it to other node"? When you unplug the cable, do you lose the SSH connection? If so, how do you manually shift it to the other node?

 

Thanks!

Re: FAS2720 & AFF220 4 Node cluster


output result attached

::> network interface show -failover -lif cluster_mgmt


Re: FAS2720 & AFF220 4 Node cluster


From the screenshot, it looks perfect and standard. cluster_mgmt is available to fail over to ports from all nodes in the failover group (node mgmt and data ports). The failover group and policy are standard, as they should be.

 

In the screenshot, I see the failover targets presented in order. If cluster_1:e0M goes down, we can simulate it:

::> set adv
::*> network port modify -node cluster_1 -port e0M -up-admin false

It should fail over to e0c; if that is unavailable, then to e0d, and so on up to e0f, and then it will switch to a different node, i.e. cluster_2:e0M, if none of the ports on cluster_1 is available.

 

You can test it out and let us know.

 

::*> network interface show -role cluster-mgmt

This should show the current node/port of the failover target, and is-home should say 'false'.
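
Once you have confirmed the failover, remember to bring the port back up and send the LIF home, roughly (the admin vserver name is a placeholder):

::*> network port modify -node cluster_1 -port e0M -up-admin true
::*> network interface revert -vserver <cluster_name> -lif cluster_mgmt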


Re: FAS2720 & AFF220 4 Node cluster


This is a standard setup issue. The ONLY ports that should be in that list (in other words, based on the output, in the Default Broadcast-Domain) are connected ports on the same physical network.

 

Different customers have different setups. With that said, at a minimum the broadcast domain should include e0M from each node. *IF* you have e0c/e0d/e0e/e0f connected and they are on the same physical network (whatever network e0M is on, like 192.168.1.1 - 192.168.1.4), then it will work. If they are not, then it is entirely possible that when the port fails (or the plug is pulled), the LIF will move to another port and advertise there (gratuitous ARP) that the IP address has moved.

 

I have seen this happen before, and the cluster became unavailable through the cluster_mgmt LIF.

 

Please correct your Broadcast-domain(s) and try again.
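
As a sketch, if e0c-e0f are not actually cabled to the same network as e0M, something like this would pull them out of the Default broadcast domain (node and port names are just examples based on the screenshot):

::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports cluster_1:e0c,cluster_1:e0d,cluster_1:e0e,cluster_1:e0f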

 

Typical broadcast domains separate things out. For example:

 

Default (MTU 1500):

node1:e0M

node2:e0M

node3:e0M

node4:e0M

 

NFS (MTU 9000):

node1:a0a-101

node2:a0a-101

node3:a0a-101

node4:a0a-101

 

CIFS (MTU 1500):

node1:a0a-201

node2:a0a-201

node3:a0a-201

node4:a0a-201

 

Provide more details if this does not work.

 

Suggestions include:

"broadcast-domain show ; ifgrp show"

Also

"net int show -failover"

 

(But please try to copy/paste the output if you can instead of posting a picture. I know some places cannot, but if you can, it is easier!)

 

Re: netapp cifs share aliases


Or you could go a little simpler:

1. Set up DDNS and let DNS round-robin between the interfaces (with most modern DDNS implementations you may need to tell ONTAP to use secure DDNS for it to actually work, and make sure you have both forward and reverse lookup zones).

2. Set up ONTAP on-box DNS load balancing:

https://kb.netapp.com/support/s/article/ka31A00000012CGQAY/how-to-set-up-dns-load-balancing-in-clustered-data-ontap

https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-nmg/GUID-2A6B1345-0C1D-4E3D-B01B-ED724A69D376.html
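
As a rough sketch of both options (SVM, LIF, and zone names are placeholders):

For DDNS with secure updates:

::> vserver services name-service dns dynamic-update modify -vserver vs1 -is-enabled true -use-secure true

For on-box DNS load balancing, each data LIF joins the delegated zone:

::> network interface modify -vserver vs1 -lif cifs_lif1 -dns-zone cifs.example.com -listen-for-dns-query true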

 

 

Re: Unable to connect to ONTAP cluster from zExplore Developmet Interface


What "user" are you trying to connet with? That user must have an "application" of "ontapi" to connect.

Re: secd.conn.auth.failure:


We are on 9.3P16 and are seeing secd.lsa.noServers: None of the LSA servers configured for Vserver xxxx are currently accessible via the network

 

Also, we see lots of these too:

 

secd.ldap.noServers: None of the LDAP servers configured for Vserver xxxx are currently accessible via the network

 

There are no network issues, nor are we using LDAP. This has been occurring for some time now. Any ideas or solutions?
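
Would output from checks like these help narrow it down? (SVM name masked, same as above)

::> vserver services name-service ns-switch show -vserver xxxx
::> vserver cifs domain discovered-servers show -vserver xxxx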

 

 

Issue with aggr creation


Hi there,

We have NetApp Release 8.1.4P1 7-Mode.

I am trying to create an aggregate of 32 disks of type FCAL with dual parity (RAID-DP) and am getting an error.

 

aggr create aggr_name -t raid_dp -d oc.32 1a.39........... (up to 32 disks) and I get the following error:

aggr create: Neither a count of disks nor a list of disks for the new aggregate was specified.
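
For comparison, both of these forms should be valid as far as I know (aggregate and disk names are just examples): a plain disk count, or -d followed by the disk names.

aggr create aggr_new -t raid_dp 32
aggr create aggr_new -t raid_dp -d 0c.32 0c.33 0c.34

Could the problem be the disk names, e.g. 0c.32 with a zero rather than oc.32?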

 

 
