Channel: All ONTAP Discussions posts

Re: Usable Space FAS 2552 C-mode

You might have to use vol move (or otherwise remove the data) to empty one of the aggregates (e.g. node2-aggr), then offline and delete that aggregate; that will return all six data disks on that node to the spare pool.

Then you can reassign those disks to the other node. Before you add them to the aggregate, make sure you verify/modify the RAID group sizing.
 
As long as you have enough room in node1-aggr to move the data from node2-aggr, you can do this process non-disruptively.
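A minimal CLI sketch of that sequence, assuming hypothetical names (node1_aggr, node2_aggr, SVM svm1, volume vol1, disk 1.0.12); verify each step against your own layout before running anything:

```
# Move the volume(s) off node2_aggr non-disruptively
volume move start -vserver svm1 -volume vol1 -destination-aggregate node1_aggr

# Once node2_aggr is empty, take it offline and delete it (its disks become spares)
storage aggregate offline -aggregate node2_aggr
storage aggregate delete -aggregate node2_aggr

# Reassign the freed spares to the other node (repeat per disk)
storage disk removeowner -disk 1.0.12
storage disk assign -disk 1.0.12 -owner node1

# Check the RAID group sizing, then add the disks to the remaining aggregate
storage aggregate show -aggregate node1_aggr -fields raidsize
storage aggregate add-disks -aggregate node1_aggr -diskcount 6
```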
 
Robin.

Clustered Ontap LIF Status


A simple command-line tool to watch the network activity on your ONTAP logical interfaces (LIFs).

 

https://github.com/robinpeters/cdot-lif-status

 

## Example:
```
$ ./lifstat.pl 
Option c required 
Option u required 
Option p required 
Usage :: 
  -c|--cluster : Cluster Name to get Logical Interface Status. 
  -n|--node       : Node Name [To limit the lif stats to node level]. 
  -u|--username   : Username to connect to cluster.       Example : -u admin 
  -p|--passwd     : Password for username.                Example : -p netapp123 
  -v|--vserver    : Vserver Name [To limit the lif stats to vserver level]. 
  -i|--interval   : The Interval in seconds between the stats. 
  -h|--help       : Print this Help and Exit! 
  -V|--version    : Print the Version of this Script and Exit!


$ ./lifstat.pl -c 10.10.10.10 -u username -p password 
Node             UUID        R-Data R-Err   R-Pkts       S-Data S-Err   S-Pkts  LIF-Name             
node-01          1065       1748060     0      187            0     0        0  nfs_lif3             
node-01          1024           976     0        5          564     0        5  node-01_clus1     
node-02          1014           564     0        5          976     0        5  node-02_clus2     
node-01          1049           492     0        4          392     0        4  node-01_clus3     
node-02          1051           392     0        4          492     0        4  node-02_clus3     
node-02          1066           220     0        1            0     0        0  nfs_lif4             
node-02          1013           128     0        2          128     0        2  node-02_clus1     
node-01          1023           128     0        2          128     0        2  node-01_clus2     
node-02          1064             0     0        0            0     0        0  nfs_lif4             
node-01          1045             0     0        0            0     0        0  iscsi_lif2           
node-01          1026             0     0        0            0     0        0  node-01_mgmt1     
node-02          1047             0     0        0            0     0        0  iscsi_lif4           
node-02          1033             0     0        0            0     0        0  DFNAS02DR_nfs_lif2   
node-01          1038             0     0        0            0     0        0  cifs_lif1        
node-01          1032             0     0        0            0     0        0  DFNAS02DR_nfs_lif1   
node-02          1058             0     0        0            0     0        0  smb3cifs_lif02       
node-01          1057             0     0        0            0     0        0  smb3cifs_lif01       
node-01          1063             0     0        0            0     0        0  nfs_lif3             
node-02          1056             0     0        0            0     0        0  node-02_nfs_lif_1 
node-01          1053             0     0        0            0     0        0  node-01_icl1      
node-01          1048             0     0        0            0     0        0  iscsi-mgmt       
node-02          1035             0     0        0            0     0        0  lif2         
node-02          1039             0     0        0            0     0        0  cifs_lif2        
node-01          1034             0     0        0            0     0        0  lif1         
node-02          1046             0     0        0            0     0        0  iscsi_lif3           
node-01          1025             0     0        0            0     0        0  cluster_mgmt         
node-02          1027             0     0        0            0     0        0  node-02_mgmt1     
node-02          1054             0     0        0            0     0        0  node-02_icl1      
node-01          1059             0     0        0            0     0        0  smb3cifs_mgmt        
node-01          1062             0     0        0            0     0        0  coecifs1             
node-01          1044             0     0        0            0     0        0  iscsi_lif1           
node-01          1052             0     0        0            0     0        0  svm2_lif1            
node-01          1050             0     0        0            0     0        0  svm1_lif1            
node-02          1055             0     0        0            0     0        0  nfs_lif_1 
^C
....
....
```

FlexClone

Before a clone is split off, it shares files with its parent volume, which means it shares the same inodes, except for files added after the clone was created.
My question: is there any way to tell how many inodes are shared with the parent and how many are owned by the clone itself?

Thank you!

Re: FlexClone


I wanted to estimate how many inodes are owned by the clone itself, and how many by its parent volume. It seems to me these options give an estimate of the amount of space based on the inodes already processed, which is different from what I am asking. Can you please show me an example of using this command?

 

Re: cluster serial number unique?


Alex,

 

We have developed a system and were using the "Cluster Serial Number" to identify the cluster. Later we realised that it is not unique across clusters. Could you please share the steps to change the "Cluster Serial Number"?

 

Regards

Unnikrishnan

Re: OnTap 8.3.2 System Manager Not Working


Hello, 

 

Just an update to this thread to offer another solution.

 

I had exactly the same problem (404 page not found) after I downgraded from ONTAP 9.1P2 to ONTAP 8.3.2P9.

 

I noticed that the web service named "sysmgr" was not enabled, so I simply ran the following command:

services web modify -name sysmgr -enabled true

 

Hope it will help someone one day. 

 

Cheers
Stéphane

Re: cluster serial number unique?

Hi Unnikrishnan,

To change the cluster serial number, obtain the proper cluster base license code for the system, either by looking up the sales order number for the first two nodes' serial numbers on the support licenses site and then looking up the licenses by sales order number, or by requesting one through your NetApp account rep, and add it with "license add". Once it is added, remove the original cluster base license.
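A minimal sketch of that add/remove sequence, assuming a placeholder license code and serial number (substitute the real values from your license file):

```
# Add the new cluster base license key
system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA

# Confirm both base entries are visible, then remove the original one
system license show -package Base
system license delete -serial-number 1-80-000000 -package Base
```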

SnapVault secondary volume, inline compression and deduplication


Hi

 

FAS8020, 8.2.4P6, 7-mode

 

I've created my SnapVault destination volumes using the following method:

 

vol create sv_ca_testvol -s none aggr0 94g
snap sched sv_ca_testvol 0 0 0
snap reserve sv_ca_testvol 0
sis on /vol/sv_ca_testvol
sis config -C true -I true /vol/sv_ca_testvol
sis config -s manual /vol/sv_ca_testvol

 

TR-3958 says this:

 

The manual schedule is an option for SnapVault destinations only. By default if deduplication and postprocess compression are enabled on a SnapVault destination it will automatically be started after the transfer completes. By configuring the postprocess compression and deduplication schedule to manual it prevents deduplication metadata from being created and stops the postprocess compression and deduplication processes from running.

 

Why am I not seeing any inline compression or deduplication savings on the SnapVault destination volumes?
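For reference, here is one way to check what is actually configured and what savings are being reported on the destination volume (7-mode commands, using the volume name from above):

```
# Show the dedupe/compression configuration and status on the SnapVault destination
sis config /vol/sv_ca_testvol
sis status -l /vol/sv_ca_testvol

# Show space saved by deduplication and compression
df -S sv_ca_testvol
```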

 

Thanks

ONTAP Recipes: Deploy a MongoDB test/dev environment on DP Secondary


ONTAP Recipes: Did you know you can…?

 

Deploy a MongoDB test or development environment on Data Protection secondary storage

 

Before you begin, you need: 

  • AFF/FAS system as primary storage
  • FAS system as secondary storage (unified replication for DR and vault)
  • Snapmirror license on both systems
  • Intercluster LIF on both systems
  • MongoDB Replica Set production system
  • Server to mount the clone database (cloning server)

 

  1. Set up the cluster peering between primary and secondary systems.
  2. Establish the SVM peering between the SVM that holds your MongoDB production database on the primary system and the SVM on the secondary system that will act as the vault.
  3. Initialize the relationship to get your baseline snapshot in place.
  4. On the unified replication system, identify the volume(s) that contain the MongoDB replica set LUNs, identify the snapshot that reflects the version you want to clone, and create a FlexClone based on that snapshot.
  5. Map the cloned LUNs to the cloning server. You don’t have to map the LUNs of the primary and all secondaries; just pick one member (for example, the primary member of the replica set) and map its LUNs to the cloning server.
  6. Mount the cloned LUNs' filesystem.
  7. Create a MongoDB config file on the cloning server, like the one that already exists in the production environment, except that the dbpath option points to the cloned LUNs' filesystem and the replication section is omitted (see the CLI sketch after this list).
  8. Connect to your cloned MongoDB database.

 

For more information, please see the ONTAP 9 Documentation Center.

 

How can I change the IP address of the "ONTAP Select Deploy (9.2RC)" DNS server?


I want to change the IP address of the DNS server that was configured when ONTAP Select Deploy was first set up.

I checked the user guide and the command list on the running system, but I could not find a command to change it.

 

 # The DNS server information is written in /etc/network/interface

 # I tried to modify it, but this file is READ ONLY!

    # edit /etc/network/interface

 

Could you tell me how to solve it?

 

cluster peer show - availability pending


I am adding nodes to a cluster running ONTAP 9.1P2. I added the nodes and then created intercluster interfaces on each of the new nodes; a firewall rule update is required, so I removed those interfaces for now.

 

 

I am not sure what the status was before, but now the availability status of all cluster peer relationships is pending, even after verifying that the cluster is still replicating and that the cluster peer address list is good.

 

 

There are no pending peer offers, so I can't do anything to change this status. How can I get the cluster peer availability back to Available?
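For context, these are the kinds of commands that can be used to check peering health from both sides (a diagnostic sketch, not a guaranteed fix; the peer cluster name and addresses are placeholders):

```
# Detailed peer state and the addresses each side is using
cluster peer show -instance
cluster peer health show

# Verify the intercluster LIFs are up and reachable
network interface show -role intercluster
cluster peer ping

# If the peer addresses have changed, update them on the relationship
cluster peer modify -cluster peer_cluster_name -peer-addrs 10.0.0.1,10.0.0.2
```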

Architecture Solution - Question


One of my friends reached out to me to provide a solution for the assignment below. Can someone answer the questions?

 

 

You have been tasked with architecting a NetApp storage solution for a new application environment. The environment consists of an Oracle database and CIFS shares holding multimedia image files.

 

  • The long-term plan for this storage environment is to host multiple customer environments, with the cluster growing across multiple FAS nodes in the future. Keep this in mind when planning the implementation, to take advantage of NetApp storage features and efficiencies.

 

  • You have 2 * FAS8080 heads
  • It has been decided that each server will only run a single protocol, SAN or NAS.

 

Firstly, the Oracle database will serve a heavily transactional application.

 

The database will be mounted on a Linux cluster (linux-1 & linux-2) with the following mount points.

 

/DB/data (Oracle datafiles) – 1024 GB

/DB/redo (Oracle online redo logs) – 100 GB

/DB/arch (Oracle archived redo logs) – 300 GB

 

As this is a heavily transactional database, it is critical that writes to the redo area have very low latency. Writes to the archive area are less latency-critical, but the DBAs often request that /DB/arch grow to several times its size when they have to keep many more archive logs online than usual. Therefore /DB/arch needs to be expandable to 1.5 TB on request. After a day or so they will delete the logs, so you can reclaim the space. The data area must handle a quite large IOPS rate.

 

To keep things simple, assume:

 

  • The storage will be mounted by 2 (Linux) hosts.
  • Standard Active/Passive Veritas clustering

 

Secondly the CIFS environment will require a 10 TB CIFS share along with a 40 TB share.

 

The 10 TB CIFS share will be used for initial storage of the image files while they are manipulated and analysed, so it has a high-performance, low-latency requirement. The 40 TB share will be used for long-term storage, with storage efficiency and capacity more important than performance.

 

1) How many shelves would you buy, of what type, and why?

2) How would you configure your physical environment and why?

ONTAP Recipes: Send ONTAP EMS messages to your syslog server


ONTAP Recipes: Did you know you can…?

 

Send ONTAP EMS messages to your syslog server


1. Create a syslog server destination for important events:


event notification destination create -name syslog-ems -syslog syslog-server-address


2. Configure the important events to forward notifications to the syslog server:


event notification create -filter-name important-events -destinations syslog-ems


The default “important-events” filter works well for most customers.
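To confirm the destination and notification are in place, a quick check such as the following should list them (not part of the original recipe, just a verification step):

```
# List configured EMS destinations and notifications
event notification destination show
event notification show

# Inspect the rules in the built-in important-events filter
event filter show -filter-name important-events
```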

 

 

For more information, see the ONTAP 9 EMS Configuration Express Guide in the ONTAP 9 Documentation Center.

 

 

Failed Disk Not Rebuilding / Prefail State Several Weeks


Greetings,

 

I have a clustered system, recently taken over for one of our customers, that is running 8.2.2P1.

These are older 6240 systems.

 

The problem is that one of the aggregates is not rebuilding a failed disk or performing the copy for the prefailed disks.

(I am not positive about the prefail; I am still trying to determine how to validate that data is being copied.)

 

1. The system appears to have adequate spares of the same type.

2. I noticed that the aggregate is failed-over to the partner node. 

3. Several aggregate scrubs are in the suspended state (one is the RAID group with the failed disk), but the two RAID groups with prefailed disks are not currently scrubbing.

 

I have not taken any action at this point other than investigating.

 

Questions:

1. Could the scrub process be causing this?

2. Would the aggregate not being on its home node be an issue? (Though I've seen rebuilds succeed in this scenario.)

 

I appreciate any help, as I have never seen this before. Disk rebuilds are one of those NetApp things that usually just happen!
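For anyone hitting something similar, a few commands that can help confirm spare availability and reconstruction/copy state on a clustered 8.2 system (a sketch; exact command availability varies slightly by release, and the aggregate/node names are placeholders):

```
# Broken disks and available spares
storage disk show -broken
storage aggregate show-spare-disks

# Per-disk RAID state for the affected aggregate (look for reconstructing/copying disks)
storage aggregate show-status -aggregate aggr_data1

# Node-shell view of the RAID layout and scrub status
node run -node node-01 -command aggr status -r
node run -node node-01 -command aggr scrub status
```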

 

 

 

Thanks in advance!

 

Ken

 

 


LUN full


Hello,

 

A thick LUN has filled up, and this caused it to go offline. It's a LUN holding VM VHD files. We managed to bring it back online and delete some data from the LUN from within Windows, but it keeps going offline due to lack of space.

 

How can I make the filer aware that we've just deleted 80 GB of data, so the LUN doesn't keep going offline? We've just upgraded our hosts to Server 2012 R2, we haven't yet installed SnapDrive, and apparently we're no longer authorised to download it from the NetApp site.

 

Can we do the space reclamation thing without SnapDrive?

 

We have FAS2020 with 7.3.2 OS.
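For context, the filer-side view of where the space is going can be checked with 7-mode commands like these (volume and LUN names are placeholders); note that deletes inside the guest OS do not free blocks on the controller unless space is reclaimed from the host side:

```
# Volume and snapshot space usage for the volume containing the LUN
df -g vm_vol
snap list vm_vol

# LUN size, space reservation and occupied size
lun show -v /vol/vm_vol/vm_lun
```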

 

Thanks!

Re: LUN full


I have managed to rejig the volume space to allow me to increase the LUN, so it now seems to stay online.

 

If I shrink the partition in Windows, then shrink the LUN, then increase the LUN back and grow the partition in Windows again, will that make the NetApp realise the data is not really 700 GB, but actually 620 GB?

Re: LUN full


Maybe give it a try? I have never tried that on a thick LUN before.

Re: ONTAP Select : Cannot create


Is it a standard or a distributed vSwitch configured at the ESXi level? If distributed, there is a bug which prevents the deployment and makes it fail at this same step. The workaround is to create a dummy VLAN port group on a standard vSwitch, without any NICs or other port groups, and then retry the deployment.
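If it helps, the dummy port group can be created from the ESXi shell with something like the following (a sketch; the vSwitch name, port group name, and VLAN ID are placeholders):

```
# Create an empty standard vSwitch and a dummy VLAN port group on it
esxcli network vswitch standard add --vswitch-name=vSwitchDummy
esxcli network vswitch standard portgroup add --portgroup-name=dummy-pg --vswitch-name=vSwitchDummy
esxcli network vswitch standard portgroup set --portgroup-name=dummy-pg --vlan-id=99
```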

 

If it is a standard vSwitch, then there may be some other issue that needs to be reviewed.
