Channel: All ONTAP Discussions posts

Re: Is data from destroyed volume destroyed.


When you delete a volume on ONTAP 8.3 and later, it goes into the VRQ (Volume Recovery Queue).

Deleted volumes are retained in the recovery queue for at least 12 hours before being completely destroyed.

This feature was added in ONTAP 8.3 to provide recovery capability for accidentally deleted flexible volumes.

 

To permanently delete the volume without waiting the default 12 hours, purge it from the queue using the following command (at the diag privilege level):

 

::*> volume recovery-queue purge -vserver svm_name -volume vol_name

Before you delete the volume, it is possible to rehost it to another SVM (in ONTAP 9) using the volume rehost command.
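For reference, a sketch of the recovery-queue and rehost workflow. The SVM/volume names are placeholders and the command set is as described in the ONTAP 9 documentation, so verify against your release:

::*> set -privilege diag
::*> volume recovery-queue show -vserver svm_name
::*> volume recovery-queue recover -vserver svm_name -volume vol_name
::> volume rehost -vserver svm_name -volume vol_name -destination-vserver other_svm

A recovered volume typically needs to be brought online and remounted (junction path) afterwards.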

 

Once you delete the volume from an SVM, there is *no way* to access its data from another SVM.

Even though ONTAP does not technically erase each block, it marks them as empty blocks, which means new data can be written there.

 

If your customer is concerned about data security, you can suggest moving to ONTAP 9 and making use of NVE (NetApp Volume Encryption).

 

You can find some information related to NVE here.

 

thanks,

Robin.


Re: Is data from destroyed volume destroyed.


Hello,

 

Thank you for answering. I already had the information you provided, but my actual question remains unanswered.

 

What happens when the second SVM reads a block that previously belonged to the first SVM? Let's say it's block 32 to the server using the LUN. What will block 32 contain when the server asks to read that block? Will it contain all zeroes, or will it contain data placed there previously when the block was part of a volume of the first SVM? And what about if the new volume is a CIFS volume?

 

Here is my scenario. I already have a tenant using the equipment. I need to introduce a second tenant to that same equipment, because this is where I have the capacity. None of the first tenant's volumes are encrypted, as the equipment was previously dedicated. The first tenant has concerns about the privacy of his data if I introduce the second tenant.

 

Re: SLAG (Storage Level Access Guard) & ONTAP 9

Re: NFS mount issue in C-mode


Can you show me how you did the export policy on vol0?

 

Thanks

Host Utilities on an RDM presented Windows 2012 cluster box


Simple... A pair of Windows 2012 servers with clustering turned on has a LUN presented via VMware RDM (raw device mapping). I know VSC is required for ESX. Should Host Utilities be installed? Is it actually needed, or at least good for future diagnostics? What's the official stance?
     *** Running cDOT 8.3.1... ESX 5.5 with LUNs presented via FCoE...
The install doc states: "enable you to connect host computers to virtual disks (LUNs) on NetApp storage systems"... but then goes on to talk about HBAs, which don't exist in my case [at least not at the guest level].

Re: Is data from destroyed volume destroyed.


Remember that the volume presented to the client/s is virtualised by ONTAP's WAFL layer to actual blocks on disk. The system will return exactly what clients have written to it - if they haven't written anything, it will return blank blocks. 

 

Your client's data will remain on disk until the block is reclaimed and rewritten for another volume. There are ways to recover it in whole or part until this occurs. If they are very concerned about security, you could consider an upgrade to ONTAP 9.1 and enable NetApp Volume Encryption (NVE) for their volumes (if your controller supports it). This will introduce software encryption for their volumes.
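If you go the NVE route, a minimal sketch of creating an encrypted volume (this assumes ONTAP 9.1+, an NVE license, and a key manager already set up; all names are placeholders):

::> security key-manager setup
::> volume create -vserver svm_name -volume secure_vol -aggregate aggr1 -size 100g -encrypt true

In later releases, an existing plaintext volume can also be encrypted in place by moving it with -encrypt-destination true on volume move start.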

Usable Space FAS 2552 C-mode


Hi,

 

 

We have a newly installed NetApp FAS2552 two-node cluster with 12 x 1.2TB HDDs, running ONTAP 9.1 C-mode. What is the best practice to get maximum usable space?

Currently 6 disks are assigned to each node, and per node I get 2.56TB usable space. How can I get maximum space?

 

 

 

need help

Latency issue with Windows Failover Cluster role failover and FAS2240-2/Data ONTAP 8.1.4 7-Mode


Greetings,

 

I've found an issue involving our specific filer model and ONTAP version (FAS2240-2/ONTAP 8.1.4 7-Mode)
with a new implementation that we're testing, and I'm hoping that someone could provide
some thoughts. When using LUNs created on this filer in this implementation, manually failing over
a file server role in a two-node Server 2016 Windows Failover Cluster using in-guest iSCSI
consistently takes around 12 minutes for the failover between WFC nodes to complete. During the
failover between these WFC nodes, a LUN reset request is sent by the MS iSCSI initiator to our
filer, and the connection to the disk is reestablished within the Windows environment.

 

I have tested the same configuration on an old filer/ONTAP version (FAS2020/ONTAP 7.3.5.1) and we
do not experience the 12 minute failover time. The failover of the file server role happens within
seconds, as expected. The only part of the configuration that changes to reproduce the long
failover time is which filer the Windows source and destination disks are hosted on.

 

The implementation is a newer Microsoft block-level replication technology called Storage Replica.
Our configuration involves two Windows Server 2016 DCE nodes in a Windows Failover Cluster, with
each node using the in-guest MS iSCSI initiator and SnapDrive 7.1.4 x64. Each node is connected to
one separate LUN for data (2TB) and one separate LUN for logging (25GB), making four LUNs total,
each thin-provisioned with SnapDrive. The four disks are then added to the Windows Failover
Cluster and a File Server role is created using one of the 2TB disks as the source disk.
Replication is then successfully enabled between the identically-sized disks using the Storage
Replica wizard, to create a source and destination for replication. The role is supposed to
failover to the other node (destination) within seconds, but this operation takes around 12
minutes on our specific filer and ONTAP version. As stated previously, the long failover does not
happen on an older filer, with an older ONTAP version.

 

We have a total of four FAS2240-2 filers, and each pair are in a HA configuration and reside at
different physical sites. I have tested hosting the storage in this configuration across the
physical sites and have also isolated the configuration to each individual site, and consistently
achieve the same long failover time of the file server role with the FAS2240-2/ONTAP 8.1.4 7-Mode
filers. The older filer is a FAS2020 pair in a HA configuration, running ONTAP 7.3.5.1. The long
failover time does not happen when hosting the storage in this configuration on the older filer.

 

Since we are currently on 8.1.4 7-mode, we are unable to get support due to the version falling
under EOVS. We intend to move to a newer version when possible to open a support case. However in
the meantime, we've been scratching our heads on this one and are hoping to see if anyone on the
NetApp forums has any ideas/thoughts. I would be happy to answer any additional questions.

 

Thanks!


Re: SnapManager for SQL and Snapdrive Service accounts


Hello Nayab,

 

Thank you for the information.

Re: Usable Space FAS 2552 C-mode


FAS2552 with 12 x 1.2TB disks, using ADP to maximize the disk space.

You can configure the space in two different ways.

I just ran this through NetApp Synergy, and here are the screenshots of the reports.

 

FAS2552 with 12 X 1.2TB HDD

 

Screen Shot 2017-06-12 at 9.41.09 AM.png

 

Option 1:

Create 1 root aggr and 1 data aggr on each node (to maximize CPU usage).

Screen Shot 2017-06-12 at 9.41.25 AM.png

As you can see, the above configuration gives you exactly the same amount of space on each node, but you lose 2 parity disks and 1 spare on each node. So in total you lose 6 disks' worth of space (which is almost half of what you have, and may not be ideal for your case).

 

 

Option 2:

Create a root aggr on both nodes, but use the remaining disks to create one large data aggr on one of the nodes (7.69TB total usable space).

 

Advantages:

 The aggr is larger.

Disadvantages:

 You are not using most of the CPU resources on node-2.

Screen Shot 2017-06-12 at 9.42.38 AM.png
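For what it's worth, Option 2's large aggregate would be created roughly like this (the node/aggr names and disk count are placeholders; check your spares and RAID group sizing first):

::> storage aggregate create -aggregate aggr1_data -node cluster-01 -diskcount 9 -raidtype raid_dp

If some of the data disks/partitions are owned by the other node, reassign them first with storage disk assign.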

 

If you don't need all the CPU resources the 2nd node can provide, you may choose Option 2.

I know a few customers who chose this option after considering their storage resource usage.

Hope this helps.

 

thanks,

Robin.

REBOOT (panic) WARNING - lost all snapvault schedules


Hello Community,

 

I have a filer that panicked and rebooted due to a power issue. After the reboot, I noticed that all of the SnapVault schedules have disappeared. Is there a way to look at previous AutoSupports/configs that may contain these schedules, so that I can just use them to recreate the schedules?
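If you can find the old schedules (a previous weekly AutoSupport may include the registry/config sections where they were stored), recreating them on 7-mode is a matter of re-running snapvault snap sched on the secondary. The volume and snapshot names below are placeholders; the syntax is count@day_list@hour_list per the 7-mode man page:

filer> snapvault snap sched -x sec_vol sv_nightly 60@mon-fri@0
filer> snapvault snap sched

Running snapvault snap sched with no arguments lists the currently configured schedules, so you can confirm each one as you recreate it.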

 

Thanks,

Re: REBOOT (panic) WARNING - lost all snapvault schedules

Re: Usable Space FAS 2552 C-mode


Hello Robin,

 

Thanks for reply,

 

How do I create one big aggregate when the disks are distributed 6 and 6 to each node? I did not get the option to create one aggregate.

 

Regards

Rakesh Bhuvad

Re: 7MTT Support / Assistance

7-mode, CIFS, local accounts and SnapMirror


Previous config: IBM N6210, ONTAP 8.1.3P3, 7-mode, no multistore license

 

Current setup: NetApp FAS8020 ONTAP 8.2.4P6 7-mode, no multistore license

 

Previous and current setup:

 

  • CIFS shares located on site A are accessed using a local FAS account, i.e. 'cifs_user'
  • Site A volumes are SnapMirror replicated to site B
  • On site B, SnapMirror destination volumes are shared out using a local FAS account identically named to the site A account, i.e. 'cifs_user'

 

Scenario:

 

  1. SnapMirrors were broken and data was written into the shares in site B under the site B local account 'cifs_user'
  2. The volumes were then replicated back to site A, and the site A volumes were made r/w again

Issue:

 

In site A, the data written to the shares whilst in site B is not accessible (permission denied) after mirroring back to site A.

 

From my perspective this should never have worked, so I'm not after any evidence to support this. However, I am told that it worked under the 'previous configuration' mentioned at the top of this post, so I am struggling to find an answer as to how it could have possibly worked. For example, have there been any changes to the ONTAP code that make the newer version 'stricter' with ACL permissions? Could the ONTAP upgrade or head swap from N6210 to FAS8020 have changed a system/volume option?

 

Any ideas at all on how this could have worked?


Re: 7-mode, CIFS, local accounts and SnapMirror


Slightly confusing setup to understand, because there is no domain in play and there is no vFiler DR in play. I assume both those statements are true, correct?

 

 

 

 

Re: 7-mode, CIFS, local accounts and SnapMirror


Yes, it is a strange setup, and if I were starting from scratch I would have used a domain service account, not local ones. However, this is a mature setup; it has been configured as outlined and has worked in the past. I do find this difficult to believe, but the evidence (documentation) seems to suggest that this is the case.

There are no vFilers; the arrays are members of a domain, but the shares are accessed and written to/read by a local account.

 

Thanks

 

 

 

Re: 7-mode, CIFS, local accounts and SnapMirror


It's a beyond-strange setup and not best practice.

But I have no idea how your issue arises. It doesn't make sense to me.

Is it an app writing to the share with a local account?

Re: 7-mode, CIFS, local accounts and SnapMirror


Yes, it's an app writing into the shares.

 

I can see how it shouldn't work, but I can't figure out how it worked during previous failover/failback tests. I'm thinking mass conspiracy at this moment.

Unable to get Netapp Cluster 9.1 cluster, node and aggregate related data


I am getting the following errors while retrieving cluster, node, aggregate, LUN, and user-role related data.

 

Device OS Version is :  NetApp Release 9.1

 

com.netapp.nmsdk.ApiExecutionException: Unable to find API: cluster-identity-get
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: system-node-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: cluster-peer-health-info-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: aggr-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: aggr-list-info
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: lun-list-info
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)

com.netapp.nmsdk.ApiExecutionException: Unable to find API: security-login-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at filer.Cluster.go(Unknown Source)
        at filer.Cluster.main(Unknown Source)

 

Is there any configuration I need to change?

 

Kindly help me.

 

Thanks

 

 
