Channel: All ONTAP Discussions posts

Re: Volume move between aggregates


Re: ONTAP 9.5 does not support ifgrp favor command ?


The following is the result of the [ifgrp] command on ONTAP 9.5.

 

-------------------------------------------------------------------------------------------

Cluster01::*> system node run -node Node01 -command ifgrp favor
ifgrp: Did not recognize option "favor".
Usage:
ifgrp timer <lacp_ifgrp_name> <short|long>

Cluster01::*>

-------------------------------------------------------------------------------------------

Cluster01::*> system node run -node CIECWDDS01
Type 'exit' or 'Ctrl-D' to return to the CLI
Node1>
Node1>
Node1> priv set diag
Warning: These diagnostic commands are for use by NetApp
personnel only.
Node1*>
Node1*> ifgrp
Usage:
ifgrp timer <lacp_ifgrp_name> <short|long>
Node1*>

-------------------------------------------------------------------------------------------

 

Only [ifgrp timer] is available ...
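For comparison, clustered ONTAP manages interface groups from the cluster shell rather than the nodeshell. A rough sketch of how to inspect an ifgrp there (the ifgrp name a0a is just an example):

-------------------------------------------------------------------------------------------

Cluster01::> network port ifgrp show -node Node01
Cluster01::> network port ifgrp show -node Node01 -ifgrp a0a -instance

-------------------------------------------------------------------------------------------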

 

Automatic node referrals CIFS and support for Hyper-V


Hi,

What is the correct information regarding automatic node referrals for a Hyper-V over SMB setup?

I have found a document from NetApp, published 09/09/2018: https://kb.netapp.com/app/answers/answer_view/a_id/1030128/~/how-to-set-up-svm%2Fcifs-for-hyper-v-over-smb-

Under the chapter "General SVM/CIFS Server and Share Configuration Requirements", section 12.3,

it says: "Automatic node referrals must be disabled for Data ONTAP versions 8.2.0 and earlier. Automatic node referrals are supported in Data ONTAP 8.2.1 and later."

 

I just wonder if that is correct. It is the only information that says it is OK to use automatic node referrals on a CIFS share for Hyper-V (if you are on a version above 8.2.0).

 

All other articles say that it is not supported.
(For example, the "SMB/CIFS Config Guide for Hyper-V/SQL" from NetApp, published February 2019, https://library.netapp.com/ecm/ecm_get_file/ECMLP2494083, page 20: "Automatic node referrals must be disabled".)

My problem today is that one node in my HA cluster (ONTAP 9.4) owns all SMB sessions.
I do not know why the SVM forwards all SMB traffic to one of the nodes.
I have two SMB LIFs, one on each node in the cluster. External DNS is set up, and both LIFs have their own IP addresses registered in DNS.
When I ping from my Hyper-V hosts, DNS round-robin works fine. But it does not matter, because the SVM keeps sending all traffic to the same node.
If I do a simple file copy from my Hyper-V server to the CIFS share using the IP address, the traffic goes to the correct node.

That is why I want to use automatic node referrals, to split the load equally over the nodes in the cluster.
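For reference, this is roughly how the automatic node referrals option is checked and toggled per SVM (advanced privilege; svm_name is a placeholder). Whether it is supported together with Hyper-V over SMB is exactly what I am unsure about:

Cluster01::> set -privilege advanced
Cluster01::*> vserver cifs options show -vserver svm_name -fields is-referral-enabled
Cluster01::*> vserver cifs options modify -vserver svm_name -is-referral-enabled true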

 

Any tips?

Best regards, Pelle Dahlkild

_______________________________
2650 dual-node HA cluster, switchless, ONTAP 9.4
SMB3

3-node hyper-V cluster (Win 2016)

 

Netapp Harvest Volume graph drop down contains erroneous data


After an ONTAP upgrade from 8.3 to 9.1, the Harvest node graph drop-down used to select nodes contains erroneous data.

(Screenshot harvest_node.JPG: the drop-down list should just contain node names.)

Re: Netapp Harvest Volume graph drop down contains erroneous data


Hi 

 

I have the same situation after upgrading ONTAP from 8.3.2 to 9.1.

I manually deleted the erroneous data in /opt/graphite/storage/whisper/, and everything is fine; the erroneous data will not show up again.
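In case it helps someone else, this is roughly what the manual cleanup looked like (the exact subdirectory layout under whisper/ and the stale directory name are placeholders; check what is actually on your Graphite host first):

# list the metric tree and identify the stale entries
ls -R /opt/graphite/storage/whisper/
# then remove only the directories belonging to the erroneous entries, e.g.:
rm -r /opt/graphite/storage/whisper/netapp/<stale_node_dir>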

Re: Netapp Harvest Volume graph drop down contains erroneous data


Hi, thanks for the feedback. I will look at that.

 

 

Sanitize entire netapp cluster


All the discussions and questions I have found related to disk sanitize have helped, but I thought I would post a new question to see if anyone has done the process over the years and can share updated results.

 

I have a remote customer datacenter that is being shut down. They have a six-node cluster with a mixture of SSD and NL-SAS <7200 rpm> disks. We are working out a schedule for when everything can be destroyed and the drives sanitized. The customer wants to ensure all data is securely wiped, not necessarily to DoD standards, but better than just zeroing the drives.

 

Drives are 3.5TB SSD and 3.7TB NL-SAS

DOT 9.x

 

Thanks for any update information shared!

 

tekievb

Re: Sanitize entire netapp cluster


For a non-DoD requirement, I would just delete all data aggregates, zero out all the disks, decommission the cluster, and re-initialize all HA pairs.

 

I can't think of a middle ground, though. You could maybe zero them out a few times?


Re: Sanitize entire netapp cluster


SpindleNinja, Thank you for taking the time to respond.

 

The customer is requesting that all drives be sanitized, so just zeroing the drives is not an option. Sorry I did not include that in my initial post.

Re: Sanitize entire netapp cluster

Re: Sanitize entire netapp cluster


The link you sent does not work correctly. Does the link you shared provide estimated times for how long the process takes?

 

I was hoping NetApp had shareable internal documentation that would tell me how long the sanitization would take.

Re: Sanitize entire netapp cluster


Also, the documentation states that some ONTAP commands are no longer valid once the sanitize option is enabled. Is there a list of which commands are affected?

 

See below:

When disk sanitization is enabled, it disables some ONTAP commands. After disk sanitization is enabled on a node, it cannot be disabled

Re: Sanitize entire netapp cluster


Speeds can vary; it's treated like a background process, so if you have nothing running on the cluster it should go pretty fast. The larger SATA drives will be slower than SAS/SSD.

 

From what I can find, these are the commands that get disabled once sanitization is enabled:

  • dd (to copy blocks of data)
  • dumpblock (to print dumps of disk blocks)
  • setflag wafl_metadata_visible (to allow access to internal WAFL files)

A couple of other links for you:

https://library.netapp.com/ecmdocs/ECMP12458210/html/GUID-BE1AF56B-40DD-4C42-99D6-76EEC9225DC5.html 

 

https://kb.netapp.com/app/answers/answer_view/a_id/1028718
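From the docs linked above, the nodeshell workflow looks roughly like this (node name, disk names, and cycle count are placeholders; the disks must already be spares, so the aggregates get destroyed first, and once the option is enabled it cannot be turned off):

Cluster01::> system node run -node Node01 -command "options licensed_feature.disk_sanitization.enable on"
Cluster01::> system node run -node Node01 -command "disk sanitize start -c 3 0a.00.1 0a.00.2"
Cluster01::> system node run -node Node01 -command "disk sanitize status"
Cluster01::> system node run -node Node01 -command "disk sanitize release 0a.00.1 0a.00.2"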

usable space


Hi

 

I have an AFF A300 controller

1x DS224C shelf with 24x 3.8 TB SSDs (SSD X358A)

Can anybody tell me what the usable space would be for it,
and how do I calculate it?

 

Thanks.

Re: Cluster setup wizard via console


Thank you guys, SpindleNinja and Gidi.

You have been a huge help.

Thanks.


Re: usable space


Fusion is telling me ~65.02 TiB usable.   

 

To manually calculate it, you'll need to understand the way ADP is configured on the disks.

Read more about root-data and root-data-data here: https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-concepts%2FGUID-B745CFA8-2C4C-47F1-A984-B95D3EBCAAB4.html 

 

With this base config, you would end up with 4 aggregates total: 2 root aggregates and 2 data aggregates.

Each disk will have 3 partitions in total: 1 root and 2 data. The root partition is ~53.88 GiB, and each data partition is ~1.72 TiB.
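As a rough back-of-the-envelope check (assuming the usual layout of one disk's worth of partitions kept as spares and RAID-DP on each data aggregate), the numbers line up with what Fusion reports:

  23 partitions per data aggregate = 21 data + 2 parity (RAID-DP)
  21 x 1.72 TiB                    = ~36.1 TiB raw per aggregate
  minus ~10% WAFL reserve          = ~32.5 TiB per aggregate
  x 2 data aggregates              = ~65.0 TiB usable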

 

Re: Cluster setup wizard via console


Hi SpindleNinja,

 

As you mentioned: "You will need to cable every external shelf that shipped with your system. This will help in cabling the SAS shelves and various other parts of the initial setup."

My question is about SAS shelves: can I use a DS2246 / SAS drives with an AFF A300 controller? Because I remember that last time you told me the AFF A300 controller is SSD only.

Looking forward to your answer.

Thanks.

Re: Cluster setup wizard via console


AFF won't recognize spinning drives. That said, according to hwu.netapp.com, yes, you can use the 2246 with IOM6, or the IOM12 shelf, with various SSD drives only, not SAS drives. I would put these on a separate stack from any 224C shelves you have.

 

The non-AFF equivalent of the A300 is the FAS8200, and you can use spinning drives on that model.

 

Edit: to add, I think the term "SAS" is where you might have been confused in the other posts. SAS is used loosely here; it can mean a SAS shelf (as they have SAS ports) or a SAS disk. My apologies.
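If it helps, once a shelf is cabled you can sanity-check what ONTAP sees with the usual show commands, roughly:

Cluster01::> storage shelf show
Cluster01::> storage disk show -fields type,container-type,shelf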

Re: Unwanted system messages every day


We have found the solution: the messages are sent via 'events'. So we deactivated them, and it works.
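For anyone else hitting this, one way to review and remove EMS event notifications in ONTAP 9 looks roughly like this (the notification ID is a placeholder; your setup may use a different mechanism):

Cluster01::> event notification destination show
Cluster01::> event notification show
Cluster01::> event notification delete -id 1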

Ontap 9.5 P1 Cluster Links both down during/after upgrade


Hi,

 

I tried to update the AFF220 cluster from 9.4P3 to 9.5P1.

 

The 1st node went fine; the 2nd node then got stuck. I got an email which said the automated NDU was paused.

Now I see both cluster links down:

e0a Cluster Cluster down 9000 1000/- - false
e0b Cluster Cluster down 9000 1000/- - false

 

They don't see each other anymore. The cables are fine; they worked before and no one touched them. I can access both BMCs and both nodes, but it says:

 

3/27/2019 14:13:02 aff220-01 ALERT callhome.andu.pausederr: subject="AUTOMATED NDU PAUSED", epoch="9fb37de9-7eae-497e-8a65-e2a1132d88b0"
3/27/2019 14:12:02 aff220-01 ALERT callhome.andu.pausederr: subject="AUTOMATED NDU PAUSED", epoch="60d38721-a585-42c5-83a5-bba67f05ddb9"
3/27/2019 14:11:46 aff220-01 ERROR net.ifgrp.lacp.link.inactive: ifgrp a0a, port e0d has transitioned to an inactive state. The interface group is in a degraded state.
3/27/2019 14:11:43 aff220-01 ERROR net.ifgrp.lacp.link.inactive: ifgrp a0a, port e0c has transitioned to an inactive state. The interface group is in a degraded state.

 

How do I get it active? The cables are fine.

 

-----------------
aff220-01
Partner: aff220-02
Hwassist Enabled: true
Hwassist IP: 10.0.220.111
Hwassist Port: 4444
Monitor Status: active
Inactive Reason: -
Corrective Action: -
Keep-Alive Status: healthy

Warning: Unable to list entries on node aff220-02. RPC: Couldn't make
connection [from mgwd on node "aff220-01" (VSID: -1) to mgwd at
169.254.23.217]

 

aff220::storage failover hwassist*> show
Node
-----------------

Warning: Unable to list entries on node aff220-01. RPC: Couldn't make
connection [from mgwd on node "aff220-02" (VSID: -1) to mgwd at
169.254.1.118]

aff220-02
Partner: aff220-01
Hwassist Enabled: true
Hwassist IP: 10.0.220.112
Hwassist Port: 4444
Monitor Status: active
Inactive Reason: -
Corrective Action: -
Keep-Alive Status: healthy

 
