Channel: All ONTAP Discussions posts

Re: Get-NcVolSize command not reporting volsize correctly when > 1024GB using Ontap Powershell Toolkit


Glad it's working! 

 

If I had to guess, I'd say it is a strange bug in the module for those two cmdlets. 


Re: Issue with aggr creation


hi there

thank you for your time

I used aggr create agg1 -R 15000 50 and it worked, but I have another issue: I have 50 × 15K RPM disks and 32 × 7200 RPM disks, and I want to combine them into one big aggregate. ONTAP won't let me mix them. Is there any way to put both in one aggregate, or do I need to attach another disk shelf?

 

 

Regards

Re: Decommission nodes/LIF's from a Cluster and remount NFS



 wrote:

Make sense?


No. You cannot remove nodes that host volumes. If the volumes have already been relocated to other nodes, any traffic to LIFs on the old nodes will travel over the cluster interconnect to the node(s) that now host the data. So moving the LIFs to the nodes that actually host the volumes will improve the situation by avoiding that indirection across the cluster interconnect.
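A hedged sketch of that LIF move in the cluster shell (the vserver, LIF, node, and port names here are hypothetical); changing the home port and reverting is non-disruptive for NFS:

```
::> network interface modify -vserver svm1 -lif nfs_lif1 -home-node node03 -home-port e0c
::> network interface revert -vserver svm1 -lif nfs_lif1
```

The revert sends the LIF back to its (new) home, so traffic lands on the node that owns the volumes.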

Re: Decommission nodes/LIF's from a Cluster and remount NFS


 

You are right. Unfortunately, we didn't do what you suggested. Now the throughput to the LIFs on two specific nodes is much heavier than on the others.

Question: how do I know whether the throughput to these LIFs is heavy enough to cause performance issues? How heavy is too heavy? There seems to be no way to see latency per LIF.
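For a rough per-LIF view, the cluster shell can show live LIF throughput and whether a LIF is currently sitting on a non-home node (commands from clustered ONTAP; the exact output fields vary by version):

```
::> statistics lif show
::> network interface show -fields home-node,curr-node,curr-port
```

If curr-node differs from home-node, that LIF's traffic is taking the indirect path over the cluster interconnect.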

 

 

Re: Issue with aggr creation


You do not want to mix them.

 

15K = high-speed 

7200 = SATA

 

VERY different performance characteristics. Not even close to a good idea to mix.

 

If you *must*, I am going to leave it to you to research (I will not dictate an answer here as I personally think it is a really REALLY bad idea) how to do it. Hints: you must use CLI and more than likely something other than "admin" level.
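Using the same -R syntax already shown earlier in this thread, the safer layout is one aggregate per drive speed (aggregate names are just examples; 7-Mode syntax):

```
filer> aggr create aggr_fc15k -R 15000 50
filer> aggr create aggr_sata -R 7200 32
```

That keeps the 15K spindles free for latency-sensitive workloads and the SATA spindles for capacity.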

Ontap Simulator in Google Cloud


Hi,

 

I would like to test something with Ansible Tower and ONTAP.

 

It would be great to install the ONTAP Simulator in Google Cloud where I already have Ansible Tower.

 

 

I can see that it's possible to import .OVA files into GCP.

 

Has anyone tried to import the ONTAP Simulator .OVA? Is it possible?

 

Regards,

Jakub
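For what it's worth, an OVA is just a tar archive, so one possible (untested and unsupported) approach is to extract the VMDK disks and feed one to GCP's image import tool. The file names below are hypothetical, and the simulator is built for VMware, so it may not boot on GCE at all:

```
tar -xvf vsim-NetApp-DOT9.ova
gcloud compute images import ontap-sim-disk1 --source-file=vsim-disk1.vmdk
```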

 

Re: Issue with aggr creation


Yeah, I know.

But I can afford the risk, as I'm not using it for very important data. Anyway, when I try to do it, ONTAP says the disks can't be mixed, so I was just asking whether it is possible at all. I will attach another disk shelf.

Thank you for your time.


Issue with snapmirror


hi there

we have NetApp Data ONTAP 8.1 in 7-Mode

I am trying to initialize a SnapMirror relationship. I am on the destination side; the source is at a remote site.

The initialization fails. I did the following:

 

vol create vol_dr aggr02 9974g

qtree create /vol/vol_dr/qtree_01

snap reserve vol_dr 0

snap sched vol_dr 0

vol options vol_dr nvfail on

vol options vol_dr minra on

vol restrict vol_dr

snapmirror initialize -S sourcefiler:vol_source destinationFiler:vol_dest

[destinationFiler:replication.dst.err:error]: snapmirror: destination transfer from sourcefiler:vol_source to vol_dr :

transfer aborted because of network error.

transfer aborted because of network error 

 

I can ping sourcefiler and it's OK, and traceroute sourcefiler also succeeds and reaches the remote filer.

Please help; I don't know where the issue is.

regards

 

 

 

Re: Issue with snapmirror


We have no bidirectional replication, and it worked fine before; I have done this mirroring many times. So what changed? I destroyed the destination volume and recreated it, but this time on a different aggregate, which shouldn't matter, should it?

Please guide me.

 

 

Re: Snapmirror (transfer aborted because of network error)


hi there

I have the same issue. I have checked everything in this post, but the issue still persists.

Any help will be appreciated.

 

 

Re: Issue with snapmirror


rdfile /etc/log/snapmirror says: transfer aborted due to network error

snapmirror.access is set to *

Is any more info required?

 

 

Regards

 

 

Re: Snapmirror Error, Transfer aborted: transfer aborted because of network error.


I have the same issue, still not resolved, but I am getting a message that the e3a link is down. I checked the /etc/rc file, and an entry like the following exists:

vif create multi -b ip <name> e3a e3c

Any help?
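Since the error points at e3a, it is worth checking the vif and trying to bring the interface back up before digging further (the vif name is whatever appears in the /etc/rc entry above):

```
filer> vif status <name>
filer> ifconfig e3a up
```

If the link stays down after this, the problem is more likely a cable or switch-port issue than a SnapMirror one.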

Re: Issue with snapmirror


Hi,

 

First thing:
I see you have created a qtree on the destination. What are you trying to achieve?

Snapmirroring to where?
1) Vol to vol
2) Vol to qtree
3) Qtree to qtree


To replicate a complete copy of the source volume to the destination volume:
dest> snapmirror initialize -S systemA:vol0 systemB:vol2
Note: destination volumes must be restricted, and destination qtrees must not yet exist.

 

 

To replicate non-qtree data from a source to a destination:
Note: do not use the vol restrict command on a qtree. If you are initializing a qtree, run the following command:
dest> snapmirror initialize -S source_system:/vol/source_volume/- dest_system:/vol/dest_volume/qtree_name

 

 

To replicate qtree data from a source to a qtree destination:
systemB> snapmirror initialize -S systemA:/vol/vol1/qtree4 systemB:/vol/vol1/qtree4bak

 

 

Further:
Is there more info available in the SnapMirror log file?
/etc/log/snapmirror

Also check (when you initiate snapmirror initialize):
/etc/messages

What is the setting on your source filer?
source> options snapmirror.access


Thanks!
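One more check worth adding: 7-Mode SnapMirror transfers use TCP port 10566, so an intermediate firewall can produce exactly this "network error" even when ping and traceroute succeed. The destination must also be authorized on the source (hostname below is hypothetical):

```
source> options snapmirror.access host=destinationFiler
```

Or, if snapmirror.access is set to legacy, list the destination hostname in /etc/snapmirror.allow on the source instead.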

SnapCenter Host Log Directory With Multiple SQL Instances


Hello,

 

We are in the midst of pushing out the SnapCenter SQL plug-in. We have at least one standalone server with two SQL instances. Under SMSQL we set up a SnapInfo LUN for each SQL instance. Under the SnapCenter plug-in, it appears that I must set up a single LUN for the whole host, not per instance. Is that accurate? If not, how would I add the second LUN since there's only one "configure host directory" link?

 

Adding to my confusion is the fact that there is a separate host log directory per SQL instance if the instances are in a Windows Failover Cluster, at least if I'm reading the Best Practices Guide correctly. It appears that you fill out a "FCI" log directory for each instance in this case.



Re: Upgrading a single node "Cluster" whilst live


Single-node clusters require downtime (a reboot) to upgrade.

Check step 9 in the link you posted:

- The node is rebooted as part of the update and cannot be accessed while rebooting.

Note: I would pause any SnapMirror relationships before you start the upgrade.
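In clustered ONTAP that pause can be done per relationship with quiesce (the destination path here is hypothetical):

```
::> snapmirror quiesce -destination-path svm_dr:vol1_dst
```

After the upgrade completes, snapmirror resume with the same destination path puts the relationship back on its schedule.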

Re: Upgrading a single node "Cluster" whilst live


The obvious exception is a two-node MetroCluster.

Re: Problem with Snapmirror through ipSec tunnel


I have exactly the same issue. Can you please let me know how you resolved it?

Re: Issue with snapmirror


thank you for your time

Yeah, there is no need for the qtree, so I eliminated it from the volume.

The issue is resolved just now, but I still don't know how. I did the following:

1. I put the source filer's IP in /etc/snapmirror.allow (which should not be required, as I am the destination and that file matters on the source side)

2. vol options vol_dr nosnap on

I just did the above and then initialized, and it worked. At the same time, I had asked our network people to check their side, and they confirmed that no ports are blocked, so I don't know how it was solved.

Please reply.

 

 


