Channel: All ONTAP Discussions posts

Re: what LIF will be used in command "net ping -node node-name -destination remote-IP"?


I believe it uses the node mgmt IP.

I prefer to force the vserver and LIF, rather than the node, when pinging; that way you're sure which interface it's going out of.
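
For example (the vserver and LIF names here are placeholders, so adjust them to your environment):

network ping -vserver svm_nfs -lif lif_nfs1 -destination 10.10.10.1

That sends the ICMP request out of the named data LIF, so there is no guessing about which interface or route is being used.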


Powershell Toolkit


Hello all,

 

I am in need of assistance with the PowerShell Toolkit for NetApp. I need a command that would be equivalent to:

environment status chassis all

Please assist if you are able to.

Re: what LIF will be used in command "net ping -node node-name -destination remote-IP"?


I thought so too, but apparently it does not: if I use "net ping -vserver cluster -lif node-mgmt-ip -destination remote-ip", it works, and that is basically the same as "net ping -node node-name -destination remote-ip".

So it looks like "net ping -node node-name -destination remote-ip" is not using the node-mgmt IP.
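
One way to check which LIF actually owns a given address is something like this (output fields may vary slightly by ONTAP version):

network interface show -fields address,curr-node,curr-port,role

That at least tells you whether the address you pinged from belongs to the node management LIF or to some other LIF on that node.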


volume latency


Hi

 

I ran the command below to check the latency on a volume and noticed that the network component is high. What does this mean for me? Network congestion?

qos statistics volume latency show -vserver <vserver> -volume <volume>

 

 

Re: Cannot set DNS on Node


Thank you very much. This cleared my doubt.

Re: volume latency


It means that the delays are coming from outside the data-processing layer; that points to the N-blade (network layer), the physical network, or the host.

I am sure you have also noticed that the response time from the disks is not optimal.
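
To see where the time is being spent, you can break the latency down per component; for example (vserver and volume are placeholders):

qos statistics volume latency show -vserver <vserver> -volume <volume> -iterations 10

The output has separate columns for Network, Cluster, Data, and Disk latency, so you can tell whether the delay is in front of the node (network/host side) or behind it (data layer and disks).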

Re: CIFS Discovery Mode


Thank you. The errors appear to have diminished so I may be good for now. Will update this thread if things change.


FAS2700 upgrade


Hello,

 

We have been using a NetApp FAS2040 in HA mode with two controllers and 33 disks (a mix of 500 GB, 1 TB, and 2 TB), providing an effective size of 11 TB. The NetApp runs on a 1 Gb network with NFS connectivity.

 

We are looking for a new mid-range solution, as the FAS2040 is out of warranty and support. The recommendation we received is a FAS2720 with 12 x 2 TB disks.

 

1. First, I would like to know whether this suggested upgrade is practical. Does anyone have experience with the FAS2720 or the FAS2700 series?

 

2. I feel 24 TB raw will not be sufficient, considering that NetApp's RAID-DP will reserve disks for parity and some disks will be kept as spares as well. So let's say 3 disks for spares and 3 for RAID-DP, giving an approximate usable size of 6 or 7 x 2 TB ~ 14 TB. I feel we should instead go for 4 TB disks, giving us 12 x 4 TB ~ 48 TB raw, or an effective 6 x 4 TB = 24 TB.

 

I would appreciate your comments and suggestions on this topic. I am currently reading the FAS2720 datasheet, and I will update this thread with more questions as needed. Thanks in advance.

 

 

Regards,

admin

Space Savings Drops After Implementing Volume-level Background Deduplication on AFF


We recently upgraded our 4-node AFF8080 cluster from 9.2P4 to 9.3P10. One of the features I wanted to implement was volume-level background deduplication. I removed scheduling from every volume and set everything to the auto policy. I thought it was great that I wouldn't have to manage scheduling of deduplication jobs anymore.

 

After several weeks, I'm noticing a sharp uptick in the number of volumes alerting that they are running out of space. In each case, my first instinct is to run a quick manual deduplication job just to make sure I really need to resize the volume. In every case so far, the alerting volumes were "deprioritized" by the auto policy so I couldn't even run dedupe manually without promoting the volume.

 

As I reviewed this situation, I noticed how the "auto" policy actually works. I thought it effectively eliminated the need for scheduled deduplication - i.e. that essentially each volume would just do the inline dedupe/compression and get the same benefits it would have had previously had I done that + scheduled dedupe. What I discovered was regular deduplication jobs running at very random times (in addition to the inline dedupe). Those random jobs might run hours after the nightly backup, so it misses some of the savings it would have had if it were run before snapshots were generated.

 

The last straw was this morning, when one of my VMware datastore volumes alerted that it was low on space. Even that volume had been deprioritized, and these volumes have the highest rate of dedupe/compression savings on our cluster. Although I don't want to, I'm starting to think I need to revert this feature and go back to scheduling.

 

Does anyone have insight into this issue? In particular, are there any improvements to this feature in 9.4 or 9.5? Any suggestions or feedback is appreciated!
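
For reference, if I do revert, these are the commands I'm planning to try (vserver and volume names are placeholders, and deprioritized volumes may still need to be promoted first, as mentioned above):

volume efficiency show -vserver <svm> -volume <vol> -fields policy,state
volume efficiency modify -vserver <svm> -volume <vol> -policy default
volume efficiency start -vserver <svm> -volume <vol> -scan-old-data true

i.e. check the current policy, put the volume back on a scheduled policy, and then rescan the existing data to pick up the savings that have been missed.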

Re: FAS2700 upgrade


There are two models currently in the 27xx series: the 2720 and the 2750. The 2720 is the more entry-level model, typically using SATA disks, whereas the 2750 typically uses SAS/SSD.

 

As far as sizing goes, either will work. The 4 TB disks will give you more room to grow, but do you plan on growing that much, and is it worth the extra upfront cost?
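
As a very rough back-of-the-envelope (assuming RAID-DP with two parity disks and one spare out of the 12, roughly 10% WAFL reserve, and right-sized capacities, which vary by disk model):

12 x 2 TB: 9 data disks x ~1.8 TiB ≈ 16 TiB, less reserve ≈ 14-15 TiB usable
12 x 4 TB: 9 data disks x ~3.6 TiB ≈ 32 TiB, less reserve ≈ 28-29 TiB usable

Those figures are before any dedupe/compression savings, which brings up the next question.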

 

Is the 2040 using any dedupe on the volumes? 
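
If you're not sure, you can check on the 2040 itself; assuming it is running 7-Mode, something like these commands should show it:

sis status
df -s

df -s lists the space saved per volume, and if you're already getting good savings there, factor that into the sizing of the new system.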

 

Re: FAS2700 upgrade


 

Thanks for the reply. Here are my comments.

 

There are two models currently in the 27xx series: the 2720 and the 2750. The 2720 is the more entry-level model, typically using SATA disks, whereas the 2750 typically uses SAS/SSD.

 

We currently have only SAS disks and were looking at both options: 1. SAS and 2. SSD. So we received two offers: the first for the FAS2720 and the other for an AFF A200 HA with 24 x 960 GB SSDs, but the cost of the latter is simply sky-high.

So it's good to know that the 2720 only supports SATA disks; otherwise I would have asked for SSD disks for this model.

 

As far as sizing goes, either will work. The 4 TB disks will give you more room to grow, but do you plan on growing that much, and is it worth the extra upfront cost?

If not 4 TB, then maybe something smaller, say 3 TB. But I really feel 12 x 2 TB (~18 TB) will not be sufficient for 3-4 years.

I do not see exponential growth, but at least 12 x 3 TB (~25-27 TB) should suffice for 5-10 years.

 

Is the 2040 using any dedupe on the volumes?

What is this?

Re: It appears "volume move" will cause massive data loss on large volume


Has there been a support case opened on this? If so, can you provide the case number?


Re: FAS2700 upgrade


Thanks for the links. I will check and get back.



Re: FAS2700 upgrade


Thanks. I will check that as well.

 

Any idea how the data migration between the old and the new NetApp filer should take place?

Re: How to configure NFS logging


You need NFSv4 to enable auditing; native file-access auditing relies on NFSv4 audit ACEs.
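
Roughly, the ONTAP side of the setup looks like this (the vserver name and log path are placeholders):

vserver audit create -vserver svm1 -destination /audit_logs -rotate-size 100MB
vserver audit enable -vserver svm1
vserver audit show -vserver svm1

After that, the files and directories you want audited need NFSv4 audit ACEs applied from a client (for example with nfs4_setfacl) before any events are logged.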

Re: FAS2700 upgrade


What is currently running on there? I saw you mention NFS, but is it VMware-related or Linux hosts?

Re: It appears "volume move" will cause massive data loss on large volume


Will do.

 

Have a case open and looking at logs as far back as I can go.

 

I've also started my own test, monitoring the volume move ref_ss snapshot.  So far so good.

 

I'm touching files, taking a snapshot, and sleeping for 24 hours, while the volume move is waiting for a cut-over.  It creates a new ref_ss snapshot every few minutes, at the bottom of the stack. 

 

So it appears to be working as intended.  I'll let you know what support has to say after looking at data.
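
For anyone following along, I'm watching it with commands along these lines (vserver and volume names are placeholders):

volume move show -vserver <svm> -volume <vol>
volume snapshot show -vserver <svm> -volume <vol> -snapshot *ref_ss*

which show the move state and cutover status, and the reference snapshots as they are created and recycled.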

TasP
