Channel: All ONTAP Discussions posts

Re: Filer-initiated network connections egressing on cluster_mgmt lif


What happens if you move the cluster management LIF to another node?

 

My guess is that it simply takes the first interface on a network with a (default) gateway, where "first" is determined by some internal kernel order of creation.


Re: Filer-initiated network connections egressing on cluster_mgmt lif


That's odd...

 

I know that with the changes in 9.2+ we removed part of the network stack to optimize it, but that also removed IP fastpath. As far as I know, this traffic is supposed to go out the node management LIF, not the cluster management LIF.

 

The simple solution would be to add a route to your proxy out the node management LIF, or otherwise modify the routing table. Or you could modify your proxy to allow the cluster management LIF.
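If you go the routing-table route, a minimal sketch might look like the following; the admin SVM name "cluster1", the proxy address 10.1.2.3, and the gateway 192.168.0.1 are placeholders for your environment:

cluster1::> network route create -vserver cluster1 -destination 10.1.2.3/32 -gateway 192.168.0.1    <-- gateway reachable from the node-mgmt subnet
cluster1::> network route show -vserver cluster1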

 

If you really want a detailed dive and you have support entitlements, I'd suggest opening a case. I couldn't pull up the ASUPs searching for that node name, so we'll probably need to pull logs and see what is going on.

Re: Filer-initiated network connections egressing on cluster_mgmt lif


Expected behavior. Management traffic is allowed to go out any node management or cluster management interface. Your ACLs should include every node management, cluster management, and service processor (SP) or baseboard management controller (BMC) IP address.
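If it helps when building those ACLs, something like this should list the addresses involved (a sketch; exact fields vary a little by ONTAP release):

cluster1::> network interface show -role cluster-mgmt,node-mgmt -fields address
cluster1::> system service-processor show    <-- shows the SP/BMC IP addresses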

Re: Filer-initiated network connections egressing on cluster_mgmt lif


Thanks for looking, y'all.

 

To aborzenkov:

  • cluster_mgmt lif on node 1: traffic coming out of .70 (the cluster_mgmt lif) and .72 (node 2).
  • cluster_mgmt lif on node 2: traffic coming out of .71 (node 1) and .70 (the cluster_mgmt lif).

To paul_stejskal:

We did add the cluster_mgmt lif to the proxy ACLs as a workaround, because we needed ASUPs to fly for a case (side note: I can't believe burt 1156898 is not getting fixed in 9.3). This question was mostly to determine whether my proxy edit was a temporary workaround for a misconfigured filer, or whether this was an intentional change in ONTAP and my proxy change needed to be made permanent.

 

I assume y'all can talk internally and reach a consensus, but I'm going to assume here that TMAC_CTG's answer is correct vs paul_stejskal's 'huh that's weird'  (sorry!).  I wish I had a cite or I had spotted this in some kind of changelog, but, oh well, I'm happy with someone telling me it's expected.

 

Thanks for the replies.

Re: System Manager SAML with domain groups


Thanks for the answer.

This is what I've done. Is there any plan to add domain group support anytime soon?

Re: /mroot/etc/log no access


For clusters with no access to a web server, but that do have access to an SCP program like WinSCP:

Before starting, make sure the diag user is unlocked and has a password set:

clustername::*> security login show -username diag

Vserver: vserver1
                                                                 Second
User/Group                 Authentication                 Acct   Authentication
Name           Application Method        Role Name        Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
diag           console     password      admin            no     none

clustername::> security login password -username diag

Enter a new password:
Enter it again:


Log in to each node's management IP.
Enter diagnostic privilege: set diag
Drop to the systemshell:

clustername::*> systemshell local
(system node systemshell)
diag@127.0.0.1's password:

clustername-02% sudo kenv bootarg.login.allowdiag=true    <-- hit enter
bootarg.login.allowdiag="true"                            <-- returns this

You can do this from the cluster management IP for the local node, but the other node will need to be reached via its own node management IP, because the software location is node-specific.

Using WinSCP, log in to the node management IP using SCP, port 22, user diag, and the diag password.

Navigate to /mroot/etc/software/
Copy over SP_FW.zip (rename whatever you downloaded, or whatever software you are installing)

clustername-02% sudo kenv bootarg.login.allowdiag=false
bootarg.login.allowdiag="false"

Disconnect from WinSCP.
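If you only have a command-line SCP client instead of WinSCP, the same copy can be done from a shell. A minimal sketch, assuming the node management IP is 10.0.0.2 (a placeholder) and the package you downloaded was renamed to SP_FW.zip:

# run from the machine holding the downloaded package, while allowdiag is still true
scp -P 22 SP_FW.zip diag@10.0.0.2:/mroot/etc/software/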

Re: newly added shelf is missing in "storage shelf show" output


Hi, were you able to resolve this issue? It would appear I may have the same issue. Thanks, Carl

 

Re: newly added shelf is missing in "storage shelf show" output


Carl,

What is the output of the following commands?

 

node run -node <node_name> sasadmin expander_map
node run -node <node_name> sasadmin shelf 'adapter_name'   # adapter_name = the SAS port the shelf loop is plugged into
node run -node <node_name> sasadmin expander_map 'adapter_name'

node run -node <node_name> disk show -n


Creating a new aggregate on an existing FAS2520 with ONTAP 8


Hi,

 

I have a FAS2520 with a DS2246 shelf running ONTAP 8. There were 12 disks in the DS2246 and 12 bays were free. We got an additional 12 disks in order to increase the storage, and I have added them to bays 12-23. We have two nodes, and I need to assign the new disks to node 1.

1. I have gone through multiple videos and documents, and took advice from the previous person who worked on these systems; all say to add the disks through the command line. When I checked through the command line, a few of the new disks are in the unassigned state and a few are in spare status. I tried to change the spare and unassigned status, but it's not working.

Please let me know how I can create a new aggregate with these disks through the command-line interface. Do I need to change the disk status from spare to anything else? Can I create the aggregate from the GUI directly?

2. The requirement is to create a new aggregate with full capacity. When I create the aggregate, I find that I can only use 5 disks in it. Can we increase that without upgrading to ONTAP 9? Please let me know how.

 

Re: Creating a new aggregate on an existing FAS2520 with ONTAP 8


In order to create a new aggr, the drives need to be owned by either node 1 or node 2 and be of type "spare".

 

Since you're getting a 5-disk aggr (3 data disks + 2 parity disks) plus 1 spare, it sounds like the system automatically split ownership of the 12 new drives between the two nodes.

 

What's your current aggr config? Just a single aggr across the cluster on node 2?

 

Upgrading to ONTAP 9.1 or higher wouldn't really matter much in this case, though I recommend upgrading to a supported version.

 

aggr create:

 https://library.netapp.com/ecmdocs/ECMP1610202/html/storage/aggregate/create.html 

disk ownership and other info: 

 

https://library.netapp.com/ecmdocs/ECMP1610202/html/storage/disk/toc.html
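
As a minimal sketch of the ownership and create steps (the node, aggregate, and disk names below are placeholders; see the docs above for the full option list):

cluster1::> storage disk show -container-type unassigned
cluster1::> storage disk assign -disk 1.0.12 -owner cluster1-01     <-- repeat for each unassigned disk
cluster1::> storage aggregate create -aggregate aggr_new -node cluster1-01 -diskcount 11 -raidtype raid_dp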


Re: Creating a new aggregate on an existing FAS2520 with ONTAP 8


What's your current aggr config? Just a single aggr across the cluster on node 2?

 

Each node currently has two aggrs; I am trying to create a 5th on node 1. When I check the disk details, it shows a few are in the unassigned state and a few are spare. Is that correct? (See Disk details.jpg.)

Re: Creating a new aggregate on an existing FAS2520 with ONTAP 8


Were these new disks or are they being reused from some other system? 

What version of ONTAP is this? 8.x?

 

Looks like disk 22 is broken, as is 23 (this one is kind of odd too, as it's showing up twice?), and a few disks are unowned.

 

After you fix those issues (the unowned and broken disks), you'll be able to create a new 11-disk aggr or even add to an existing aggr.
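
A rough sketch of the checks and the expansion, with placeholder names; adjust the aggregate name and disk count to what you actually find:

cluster1::> storage disk show -container-type broken        <-- confirm which disks are failed
cluster1::> storage disk show -container-type unassigned    <-- confirm which disks still need an owner
cluster1::> storage aggregate add-disks -aggregate aggr1_node1 -diskcount 10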

 

 

Downgrade or revert new AFF A300 from 9.6 to 9.5


We have 2 new AFF A300 systems that came preinstalled with ONTAP 9.6P2. Our existing clusters of FAS8040 are running 9.5P5. We'd like to downgrade the A300s to ONTAP 9.5. What are the proper procedures for downgrading the nodes? Our A300s are connected to the cluster switches (not part of the cluster yet), but should we connect the A300 nodes together when downgrading ONTAP? Our goal is to downgrade the A300s to ONTAP 9.5 and then initialize with ADPv2.

Re: Downgrade or revert new AFF A300 from 9.6 to 9.5


If these are new systems, it is faster to simply install the other version. Use special boot menu option 9a to remove existing partitions, then option 7 to install the desired version (this needs an HTTP server to download the ONTAP image from), and then option 4 or 9b to initialize. After that, join the nodes to the existing cluster.
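
Purely as an illustrative sketch of that sequence (the menu wording varies a little by release, and the web server URL and image name below are placeholders):

LOADER> boot_ontap menu
Selection (1-9)?  9       <-- Configure Advanced Drive Partitioning
Selection?        9a      <-- Unpartition all disks and remove their ownership information
Selection (1-9)?  7       <-- Install new software first
What is the URL for the package?  http://webserver.example.com/95P5_q_image.tgz
...after the install finishes and the node reboots back to the boot menu...
Selection (1-9)?  4       <-- Clean configuration and initialize all disks (or 9b to initialize with ADPv2 partitioned disks)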

Issue with SSH to nas Vserver


Hey,

I have a cluster consisting of 4 nodes: two "long existing" FAS9000s and two recently added A200s.

All nodes are running 9.5P8.

Now I have a NAS vserver providing CIFS & NFS services.

The vserver has 1 management IP with the data role and data-protocols set to none, and 4 data IPs (one on each node) with the cifs and nfs data-protocols and the data firewall policy.

The issue is that the IPs that reside on nodes 1 & 2 are reachable through SSH, although SSH is not permitted in the firewall policy.

LIFs 3 & 4, which are newer and reside on the A200s (created after 1 & 2), are not reachable through SSH, even though the configuration of all the LIFs appears to be identical.

I tried to bring LIFs 1 & 2 down for a few seconds and then back up, and also to change their firewall policy to mgmt and then back to data, but it didn't help.

Does anyone have an idea why this might happen and how to resolve this?

Thanks!


Re: Issue with SSH to nas Vserver


Hi,

 

Could you share this output:

::> system services firewall policy show
::> network interface show -fields firewall-policy,lif,address

 

Note: the firewall policy 'mgmt' applies to both node-mgmt and cluster-mgmt LIFs.

Re: newly added shelf is missing in "storage shelf show" output


Hi,

Thanks for your response. I have a case open for this issue and I am working with tech support to resolve it.

I can see both shelves, the SAS port loops, and all the disks using these commands.

Thanks for your help.

Carl

How to identify Cold/Archive/Infrequent data from Netapp Volumes on FAS systems?


Hi,

 

I have quite a few FAS systems, mostly running HDD aggregates and some on SSD aggregates. These systems have been in use for ages and are likely to have orphaned or rarely used data on the volumes.

I'd like to find a way to scan volume data and identify files/directories that are infrequently used or cold. Is there a tool/script to do this on NetApp FAS systems?

I understand FabricPool has an "inactive data reporting" feature that can be leveraged; however, I DON'T use FabricPools.

I do have NetApp OCI installed. Can it be used for the above use case? If yes, please advise how.

Additionally, can NetApp's XCP tool be used for such a task? I am also open to exploring 3rd-party tools, preferably open source.

I'd like to hear your views on how to go about this task.

 

Regards,

Ashwin

Re: How to identify Cold/Archive/Infrequent data from Netapp Volumes on FAS systems?


A script something like what is described in this blog post could potentially fit the bill: it runs recursively on a given share/directory and gives you the list of files that fit your definition of "cold/archive/infrequent", along with the total size of all of the files that fit the criteria. A rough sketch of the idea is below.

 

As far as I know, inactive data reporting will only identify the total amount of data that is considered inactive, not necessarily the files themselves. 
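
As a rough sketch of that kind of script (not the exact one from the blog post), run from any client with the volume mounted over NFS; the mount point /mnt/vol1 and the 365-day threshold are placeholders, and it assumes atime is meaningful on your mounts and that GNU find is available:

#!/bin/sh
# List files not accessed in the last 365 days and report their combined size.
find /mnt/vol1 -type f -atime +365 -printf '%s\t%p\n' \
  | awk -F'\t' '{ total += $1; print $2 } END { printf "Total cold bytes: %d\n", total }'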

Re: Issue with SSH to nas Vserver


Thanks for the help.

 

system services firewall policy show:

data:
  dns       0.0.0.0/0
  ndmp      0.0.0.0/0
  ndmps     0.0.0.0/0
  portmap   0.0.0.0/0

mgmt:
  dns       0.0.0.0/0
  http      0.0.0.0/0
  https     0.0.0.0/0
  ndmp      0.0.0.0/0
  ndmps     0.0.0.0/0
  ntp       0.0.0.0/0
  portmap   0.0.0.0/0
  snmp      0.0.0.0/0
  ssh       0.0.0.0/0

(all of the above are under the allowed tab)

net int show -fields firewall-policy,lif,address

vserver   lif    address   firewall-policy
--------- ------ --------- ---------------
vs-nas    mgmt   x.x.x.a   mgmt
vs-nas    nas1   x.x.x.b   data
vs-nas    nas2   x.x.x.c   data
vs-nas    nas3   x.x.x.d   data
vs-nas    nas4   x.x.x.e   data
