
Re: Ontap 8.x and BTRFS


Thanks, we were thinking that we'll need to carve out some dedicated disk space for BTRFS, just not sure if we should use iSCSI or NFS.


Is there a Volume Counter that shows Random versus Sequential IO percentage


Is there a volume-level metric that shows the random versus sequential IO percentage? I was able to get that counter at the SVM level, but I need to see it per volume:

 

statistics show-periodic -object readahead -instance <SVM> -counter seq_read_reqs|rand_read_reqs
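For a per-volume view, one avenue that may work is the QoS workload object, which gets one instance per volume. This is an assumption on my part; counter and instance names vary by release, so confirm them in the catalog first:

statistics catalog instance show -object workload
statistics catalog counter show -object workload
statistics show-periodic -object workload -instance <volume_workload> -counter sequential_reads|sequential_writes

If sequential_reads/sequential_writes are not listed in the catalog on your release, the readahead object may be the only place these percentages are exposed.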

Nfs Vol Move


Hi,

 

We have a 4-node cluster (ONTAP 9.1), and we are planning to move our NFS volumes to an aggregate on a newly added node.

 

The first question: can we move an NFS volume non-disruptively to another node?

The second question: after moving the volume to the newly added node, which LIF is going to serve the NFS volume? There is no LIF option when you move a volume.

 

Thanks,

Tuncay

 

 

NetApp Storage Monitoring from two different OCUM/OCPM and DFM


Hi There,

 

We are planning to set up DC-DR for our NetApp monitoring tools: OCUM, OCPM, and DFM.

 

Primary site:  Site-A

Secondary site: Site-B

 

We will deploy 3 VMs in each location (2 VMs for OCUM/OCPM to monitor the clustered Data ONTAP controllers, and 1 VM for DFM, which will collect data from the 7-Mode controllers).

 

Both the Site-A and Site-B monitoring servers will collect data from the end devices (meaning the same controller will be integrated with two different monitoring infrastructures).

 

Kindly let me know if this dual polling is possible. Also, please confirm there will not be any impact on storage array performance.

SVM DR and the physical location of CIFS Share Information in Cdot


First, some background: I have just upgraded to ONTAP 9.1P8 and want to take advantage of the SVM DR feature on our CIFS SVMs.

 

The Data Protection guide indicates I should ensure that all volumes, including the root volumes, have the same names in the source and destination SVMs. Currently I have root volumes on the source and destination SVMs, and on each SVM I use root volume protection as recommended in NetApp documentation.

 

My question: how can I incorporate the source SVM's root volume into this without disrupting the SVM root volume protection on the destination? Or, should I replicate the source root volume and keep a separate volume for the destination's root volume?

 

Also: is it accurate to assume that the CIFS Share Information is stored in the root volume, and that's the reason for replicating it offsite?

 

Finally: during DR tests, we currently flexclone volumes for CIFS Servers and then use the results of a PowerShell script to create commands in an Excel spreadsheet to rebuild the shares. Very convoluted. With SVM DR I'm hoping there's an easier way to conduct the test. Any suggestions?

 

Thanks in advance for any thoughts / suggestions on these questions!

Re: SVM DR and the physical location of CIFS Share Information in Cdot


SVM-DR is super simple. We use it with identity-preserve, discarding the network config. It's a derivative of 7-Mode's vfiler DR.

 

If you are talking about LS mirrors on the root, don't worry about it.

 

Just follow the guide for svm-dr and you will be set

 

Forget that flexclone workflow as well. We do full failovers and failbacks. That's the only way to truly test DR

Re: SVM DR and the physical location of CIFS Share Information in Cdot


Thanks jgpshntap! Unfortunately we can't do failover/failback because our DR network is only available during DR tests and/or actual DR situations (we rent the space in a datacenter from IBM). Because of that we do flexclones, although with SVM DR I would have to modify the plan to flexclone to a different SVM (as I understand it).

 

When you say "don't worry about it" re: LS mirrors on the root, what do you mean? Should I not have LS mirrors on the root of the destination (DR) SVM?

Re: SVM DR and the physical location of CIFS Share Information in Cdot


If we were in a true DR, we would establish LS mirrors on the root.

 

So you are still stuck with FlexClones; you would absolutely have to present them to a new SVM.

 

 

Again, that's not a true DR. Tell MGMT a real DR test would be actually failing over the full workload on the full DR network. How else do you know if stuff really works...

 

 


Re: SVM DR and the physical location of CIFS Share Information in Cdot


Thanks JGPSHNTAP. During our tests we have the full DR network; however, it's isolated from our main network, so there's no network conflict. We actually use the same IPs as production, so we have to isolate the network.

 

I don't think I'm expressing my question very clearly. Let me try again: on both the source and destination SVMs, I currently have root volumes with root volume protection in place. Documentation indicates I should replicate the source root volume to a matching volume on the destination with the same name. My question is, should I:

 

(a) remove the existing destination root volume and root volume protection, and replace them with a single destination root volume with the same name as the source root volume?

 

Or -

 

(b) leave the existing destination root volume and root volume protection in place, and create a new volume with the same name as the source root volume for SVM DR?

Re: SVM DR and the physical location of CIFS Share Information in Cdot


Never mind, I think I found the answer. The Express Guide says the following after creating the destination SVM:

 

------------------------------------------------

The destination SVM is created without a root volume and is in the stopped state.

------------------------------------------------

 

So apparently I would delete the root on the destination.

 

I'm starting to think SVM DR isn't going to work for us. Looking further in the guide, it looks like you have to set up a CIFS server on the destination, which I won't be able to do since there's no live data network unless we're in the middle of a DR test.

Re: SVM DR and the physical location of CIFS Share Information in Cdot


If you have relationships in play and you want to convert them to an SVM-DR relationship, that's pretty easy

Re: SVM DR and the physical location of CIFS Share Information in Cdot


Thanks, yeah I see that in the guide.

 

To clarify in case someone searches this topic: I was mistaken about having to create a CIFS server on the destination. That's only required if I set identity-preserve to false, which I'm not planning to do.

Re: SVM DR and the physical location of CIFS Share Information in Cdot


Testing is not going well. Does anyone have insights on the following?

 

I have a test source SVM and test destination SVM. Each side has a root volume and a single test data volume. I verified the names of both volumes are the same on each side. I have a standard snapmirror job set up for the test data volume, to simulate conditions in my existing CIFS SVMs.

 

I successfully created the SVM DR relationship, but when I try to resync I get the following:

 

-----------------------------------------------------------------------------

Error: command failed: There are one or more volumes in this Vserver which do not have a volume-level SnapMirror relationship with volumes in Vserver <source_vserver>.

-----------------------------------------------------------------------------

 

I thought the issue might be the load-sharing volumes on the source, so I ran volume modify with -vserver-dr-protection unprotected, but got the following:

 

-----------------------------------------------------------------------------

Error: command failed: Modification of the following fields: vserver-dr-protection not allowed for volumes of the type "Flexible Volume - LS read-only volume"

-----------------------------------------------------------------------------

 

I then thought maybe I need to mark the root volume as unprotected, but I got the following:

 

-----------------------------------------------------------------------------

Error: command failed: Cannot change the protection type of volume <volume> as it is the root volume.

-----------------------------------------------------------------------------

 

Finally, I changed the name of the destination root volume to something different, and set up a standard replication job for the source root volume. When I try to resync I get:

 

-----------------------------------------------------------------------------

Error: command failed: The source Vserver root volume name <volume> is not the same as the destination Vserver's root volume name <volume>. Rename the destination volume and then try again.

-----------------------------------------------------------------------------

 

I am at a loss. Any ideas?

Re: Nfs Vol Move


 

Tuncay -

 

The volume move command is disruptive, mostly.

 

There's the time it takes to do the final SnapMirror transfer while IO is quiesced on the front end, before the VLDB is updated to point to the new volume location.

The default cutover delay is 30 seconds.

See the man page for the command: http://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-cmpr-920%2Fvolume__move__start.html

 

As for your second question: NFSv3 will continue to use the same LIF.

Best practice would be to migrate the LIF to the new node when you move the volume.
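A minimal sketch of both steps; every name below (SVM, volume, aggregate, LIF, node, port) is a placeholder, not taken from the original post:

volume move start -vserver svm1 -volume nfs_vol01 -destination-aggregate aggr_newnode_01
volume move show -vserver svm1 -volume nfs_vol01
network interface modify -vserver svm1 -lif nfs_lif01 -home-node newnode-01 -home-port e0c
network interface revert -vserver svm1 -lif nfs_lif01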


I hope this response has been helpful to you.

At your service,

Eugene E. Kashpureff, Sr.
Independent NetApp Consultant http://www.linkedin.com/in/eugenekashpureff
Senior NetApp Instructor, Fast Lane US http://www.fastlaneus.com/
(P.S. I appreciate 'kudos' on any helpful posts.)


 

Re: flow control settings on ifgrp and the underlying physical ports


I am not seeing inheritance of the flow control setting. I have flow control off on all underlying ports, yet the ifgrps all show 'full'.

We did not set this; the ports had flow control set to none from the start.
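For reference, a quick way to see what each port and ifgrp currently reports, and to set it explicitly (node and port names are examples; whether an ifgrp port accepts the setting directly may depend on your release):

network port show -fields flowcontrol-admin,flowcontrol-oper
network port modify -node <node> -port a0a -flowcontrol-admin none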

John

 


Re: Nfs Vol Move


The volume move command is disruptive, mostly.

 

I think you mean non-disruptive. :)

Re: SVM DR and the physical location of CIFS Share Information in Cdot


Hey man,

 

What I don't get is why you care so much about the root volume. SVM DR gives you exactly the same thing you have on the source system. If you change/modify/manipulate the volumes in the SVM DR destination, you're bound to fail. If you were in a DR, you would break the SVM DR SnapMirror on the destination and then fire up the vserver/SVM there, et voilà, you have everything running active in your destination datacenter.

 

So the first error you get is an indicator that the volume information in your SVM DR destination is no longer identical to the source SVM; there must have been a modification.
vol show -vserver svmy
vol show -vserver svmy_dr

should give you an idea of what is different. This is my go-to runbook for SVM DR:

create

DEST> vserver create -vserver svmy_dr -subtype dp-destination
SRC> vol show -vserver svmy -volume *
DEST> vserver add-aggregates -vserver svmy_dr -aggregates <aggr_list> // pick aggregates by disk type, for example whether you want the data on SATA or SSD
DEST> vserver peer create -vserver svmy_dr -peer-vserver svmy -applications snapmirror -peer-cluster v
SRC> vserver peer accept -vserver svmy -peer-vserver svmy_dr
DEST> snapmirror create -source-vserver svmy -destination-vserver svmy_dr -type DP -throttle unlimited -policy DPDefault -schedule hourly -identity-preserve true
DEST> snapmirror initialize -destination-vserver svmy_dr

failover

DEST> snapmirror quiesce -destination-vserver svmy_dr
DEST> snapmirror break -destination-path svmy_dr
SRC> vserver stop -vserver svmy
DEST> vserver start -vserver svmy_dr

and you are running at the destination with the full configuration of all your shares, exports, interfaces, etc.
Now you have to decide whether you want to resync back to the state you had on the source, or create a new SVM DR relationship to mirror everything back to your previous source. So either:


SRC> snapmirror resync -destination-vserver svmy

OR
OLD_SRC> snapmirror create -source-vserver svmy_dr -destination-vserver svmy -type DP -throttle unlimited -policy DPDefault -schedule hourly -identity-preserve true
// Day X, when the main site is switched back
OLD_SRC> snapmirror quiesce -destination-vserver svmy_dr
OLD_SRC> snapmirror break -destination-path svmy_dr
OLD_SRC> vserver stop -vserver svmy_dr
SRC> vserver start -vserver svmy

When you delete volumes on the source SVM, it's a bit more tricky to get the SVM DR mirror running again:

delete svmdr volume

DEST> snapmirror break -vserver svmy_dr
SRC> snapshot delete -vserver svmy -volume volx -snapshot * -ignore-owners true
SRC> snapmirror list-destinations -source-vserver svmy -source-volume volx
SRC> set diag
SRC> snapmirror release -destination-path svmy_dr:volx -relationship-id z -force
SRC> vol offline volx -vserver svmy
SRC> vol delete volx -vserver svmy
DEST> snapmirror resync -vserver svmy_dr

if the volume still exists at the destination

DEST> snapmirror break -vserver svmy_dr
DEST> vol offline volx -vserver svmy_dr
DEST> vol delete volx -vserver svmy_dr
DEST> snapmirror resync -vserver svmy_dr

Hope this clears up some of the situation; if not, please give us a bit more background.

Cheers,

axsys

Volume size utilization


Hi,

 

We have created a volume of 3.6 TB with UNIX security style, deduplication and compression enabled, and 'Enable Fractional Reserve (100%)' unchecked in OnCommand System Manager.

 

There are no Snapshot copies or SnapMirror relationships configured for the volume.

 

The OS is Data ONTAP 8.2.4P4 7-Mode.

 

Within that volume we have created a 3 TB LUN, which is mapped to a Windows server. Current space utilization of the LUN is 58%.

 

Current volume utilization is 95%, with 170 GB free.

 

I want to understand where this 400+ GB is being used.
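A few 7-Mode commands that may help narrow down where the space is going (volume and LUN paths are placeholders). The usual suspects are the LUN's space reservation, since a space-reserved 3 TB LUN charges the full 3 TB against the volume regardless of host-side usage, plus the snapshot reserve and any Snapshot copies:

df -h /vol/<volname>
df -S /vol/<volname>
snap list <volname>
snap reserve <volname>
lun show -v /vol/<volname>/<lun>
vol options <volname>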

 

Snapmirror filesys-size-fixed


Hello Guys,

 

Here's the setup:
Source: on-premises FAS2520 with ONTAP 9.1P8
Destination: AWS system with ONTAP 9.1

We have one volume SnapMirror relationship from on-premises to AWS; no SVM DR, just a simple SnapMirror.

I thought that filesys-size-fixed would always be true on the destination, but this does not seem to be the case. So I went to the destination and ran:

snapmirror break -destination-path vserver:volume
vol modify -vserver x -volume volume -filesys-size-fixed true
vol show -vserver x -fields filesys-size-fixed (output: true)
snapmirror resync -destination-path vserver:volume

and suddenly filesys-size-fixed is false again... hence it does not automatically grow the destination volume when I increase the size at the source. Has anyone ever experienced that? I have various SVM DR relationships running on other systems, where the source has filesys-size-fixed set to true in some cases and false in others, but the destination is always true.
I'm not sure whether this has to do with the ONTAP version or whether it's a configuration issue.
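One way to compare both ends before and after the resync (a sketch; replace the SVM and volume names with yours):

vol show -vserver <src_svm> -volume <vol> -fields size,filesys-size-fixed,type
vol show -vserver <dst_svm> -volume <vol> -fields size,filesys-size-fixed,type
snapmirror show -destination-path <dst_svm>:<vol> -fields state,status,healthy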

 

Cheers,

axsys

Re: Backing up CIFS shares (currently using Unitrends) Options?


Anyone? If I cannot find something, another consideration is replacing my NetApp with different storage and converting the NetApp to storage for my backups.
