Hi,
Thank you for contacting NetApp Communities. Could you show us the output of the command below?
storage disk show -raid-info-for-aggregate
Thanks,
Nayab
Has anyone ever experienced a Volume SnapMirror fail at the exact same spot consistently?
This is the very last mirror in a migration project. The other nine VSM mirrors transferred from the same source to the same destination just fine.
During the initialization phase the transfer gets to 342.5GB and then the source reports that the SnapMirror failed, with just a generic message. The destination continues to say transferring for another 30 minutes before it finally stops with a failed message (also generic). The source volume is only using 5% of its inodes and 80% of its storage space. It is not deduped or compressed.
I have tried multiple things to troubleshoot. I have deleted the SnapMirror relationship and volume on the destination and created them again. I have created the destination volume at twice the size and started the mirror again. Everything I have tried stops at the same 342.5GB.
On the source I created three QSMs to a bogus volume and those QSMs finished just fine.
Source = NetApp Release 8.1.4P9D18
Destination = NetApp Release 9.1P2
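When a VSM dies at exactly the same offset every run, the source usually logs a more specific reason than the generic GUI message. Below is a minimal sketch of pulling the source-side SnapMirror log over ONTAPI, reusing the NaServer pattern from the script later in this thread; the system-cli call and the 7-Mode log path /etc/log/snapmirror are assumptions to verify against your release, and the credentials are placeholders.

# Hypothetical sketch: dump the 7-Mode source's SnapMirror log to find the real
# abort reason. Hostname comes from the command line; password is a placeholder.
import sys
from NaServer import *

src = NaServer(sys.argv[1], 1, 6)           # 7-Mode source controller
src.set_admin_user("admin", "********")

cli = NaElement("system-cli")               # run a CLI command over ONTAPI
args = NaElement("args")
for word in ["rdfile", "/etc/log/snapmirror"]:
    args.child_add_string("arg", word)
cli.child_add(args)

out = src.invoke_elem(cli)
if out.results_status() == "failed":
    print(out.results_reason())
else:
    # look for the entries logged around the time the 342.5GB transfer aborts
    print(out.child_get_string("cli-output"))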
Hi,
You haven't included much information about the source volume. Do you, by chance, have active QSM sessions going to this volume?
S.
No information in ZEDI documentation.
I've seen it happen when networks have incompatible MTUs. What does the network look like between the two systems?
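If you want to rule the MTU out quickly, a do-not-fragment ping sized for the MTU you expect will show whether the path really carries it end to end. A small sketch, assuming a Linux host with iputils ping sitting on the replication network; the destination address is a placeholder, 8972 bytes of payload corresponds to a 9000-byte jumbo MTU (28 bytes of IP/ICMP headers), and 1472 would be the equivalent for a standard 1500 MTU.

# Hypothetical path-MTU probe using a do-not-fragment ping (Linux iputils).
import subprocess

DEST = "192.0.2.10"        # replication-network address of the far system (placeholder)
MTU = 9000                 # the MTU you believe the path carries
PAYLOAD = MTU - 28         # subtract IP (20) + ICMP (8) header overhead

rc = subprocess.call(["ping", "-M", "do", "-c", "3", "-s", str(PAYLOAD), DEST])
if rc == 0:
    print("Path carries %d-byte frames without fragmentation" % MTU)
else:
    print("Fragmentation needed or packets dropped - check MTUs along the path")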
It all comes down to budget, which this question doesn't cover.
The question proposes a 2.5TB DB workload, and a 50TB CIFS workload
I'd buy a shelf of 3.8TB SSDs, which would give about 70TB usable space, then use QoS on the long term archive vols if needed.
But I suspect the answer whoever wrote this is looking for will involve SAS disks for some workloads and SATA drives for others. Even so, the cost/benefit of a simple config with just 3.8TB SSDs versus multiple shelves of different drive sizes should win out at this point. There's no point making things hard for yourself.
Spinning SAS will eventually go the way of the dodo - think of our 15TB SSDs: one shelf of them is usually cheaper to buy and operate than 15 shelves of 900GB SAS drives. SATA drives meanwhile keep getting larger while per-drive IOPS stays flat, so their IO-per-TB keeps falling; even they will turn into secondary-only storage eventually.
Obligatory "imho" for all this stuff, for certain values of "h".
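For what it's worth, a back-of-the-envelope for that ~70TB figure, assuming one 24-drive shelf of 3.8TB SSDs, RAID-DP, one spare, and typical right-sizing and WAFL reserve; the overhead percentages below are assumptions rather than official sizing numbers, so treat the result as rough.

# Rough usable-capacity estimate for one shelf of 3.8TB SSDs (assumed overheads).
SHELF_DRIVES = 24
DRIVE_TB = 3.84            # marketed capacity per SSD
PARITY = 2                 # RAID-DP parity drives
SPARES = 1

data_drives = SHELF_DRIVES - PARITY - SPARES       # 21 data drives
right_sized_tb = DRIVE_TB * 0.94                   # ~6% right-sizing (assumed)
aggr_tb = data_drives * right_sized_tb
usable_tb = aggr_tb * 0.90                         # ~10% WAFL/aggregate reserve (assumed)

print("Data drives: %d" % data_drives)
print("Approx usable: %.1f TB" % usable_tb)        # lands in the ~70TB ballpark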
Hi,
Thank you for contacting NetApp Communities.
My experience: if you delete a load of data from the client side (e.g. NTFS), the client marks the blocks as free rather than physically zeroing out the data. Down at the storage level WAFL has no way to know these blocks have been deleted, so when you write more data to the LUN it will consume new blocks in the volume. Typically, as a LUN ages, you will find the NetApp side shows the LUN at, or close to, 100% full while the client's filesystem may still have plenty of space. This is by design and often not a problem, although it looks a bit odd at first. Check out SnapDrive's Space Reclaimer feature if you are using Windows - it will reclaim those free blocks at the WAFL end if required.
As I see you are using 7.3.2, where space reclamation is only possible using SnapDrive. Please refer to the KB below, which shows how to reclaim space from a LUN without SnapDrive; however, the article refers to ONTAP 8.1.2, which means you would need to upgrade to at least that version to make use of it.
Reference: How to Reclaim LUN Space Without SnapDrive
Note: the above KB requires downtime to reclaim space. If you don't have the luxury of downtime, I would suggest upgrading to ONTAP 8.2, which supports auto reclamation: once you enable it on a LUN, whenever blocks are freed on the host side NetApp will automatically reclaim the space and mark those blocks as free on the controller side.
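If it helps, once you are on 8.2 the auto-reclamation piece is just a per-LUN setting plus a host that issues SCSI UNMAP (e.g. Windows 2012+, or Linux mounted with discard). Below is a minimal sketch of switching it on over ONTAPI using the system-cli call; the lun set space_alloc syntax, LUN path, hostname and credentials are assumptions/placeholders to verify against your 7-Mode release before use.

# Hypothetical sketch: enable automatic space reclamation on a 7-Mode 8.2 LUN.
# LUN path, controller name and credentials are placeholders.
from NaServer import *

filer = NaServer("filer01", 1, 6)
filer.set_admin_user("admin", "********")

cli = NaElement("system-cli")
args = NaElement("args")
for word in ["lun", "set", "space_alloc", "/vol/db_vol/lun0", "enable"]:
    args.child_add_string("arg", word)
cli.child_add(args)

out = filer.invoke_elem(cli)
if out.results_status() == "failed":
    print(out.results_reason())
else:
    print(out.child_get_string("cli-output"))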
Thanks,
Nayab
Hello,
I am very new to NetApp, so I will do my best to explain our current setup.
Release Version - 8.1.4 7-Mode
Model - 3210 (x2 HA Pair)
VSES - 1.2.0.163 (McAfee VirusScan Enterprise for Storage, 2x scanning servers)
We have created a private interface group for the AV traffic - I added the private network IPs to the vFilers.
We have also set the following vscan variables on the filer (do they need to be set at the vFiler level too?):
VSCAN OPTIONS
TIMEOUT = 10 SECONDS
ABORT_TIMEOUT = 50 SECONDS
MANDATORY_SCAN = OFF
CLIENT_MSGBOX = OFF
SCANNER Policy
TIMEOUT = 40
THREADS = 150
Nothing is enabled as of yet. We had issues previously with file locks, and I believe this was down to Mandatory_scan = On (now off!).
The question is: how do I limit what is scanned? Do I have to enable vscan on both the filer and the vFiler, and add the private network IPs to the McAfee ePO policy?
Also, if anyone could recommend any changes to our vscan settings above, it would be appreciated.
Thank you,
Phil
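Regarding limiting what gets scanned: in 7-Mode the usual lever is the vscan extensions include/exclude lists, and scanning is switched on per vFiler context with vfiler run. A sketch of what that could look like over ONTAPI, reusing the NaServer pattern from the script later in this thread; the exact vscan extensions and vfiler run syntax are assumptions to check against the File Access and Protocols guide for 8.1, and the vFiler name, controller name, credentials and extension list are placeholders.

# Hypothetical sketch: scan only a chosen set of extensions and enable vscan
# in one vFiler context. All names and the command syntax are placeholders.
from NaServer import *

filer = NaServer("filer01", 1, 6)          # physical controller (placeholder)
filer.set_admin_user("admin", "********")

def run_cli(words):
    # Run a 7-Mode CLI command through the system-cli ONTAPI call
    cli = NaElement("system-cli")
    args = NaElement("args")
    for w in words:
        args.child_add_string("arg", w)
    cli.child_add(args)
    out = filer.invoke_elem(cli)
    print(out.child_get_string("cli-output"))

# Replace the include list so only these extensions are scanned
run_cli(["vfiler", "run", "vfiler_cifs1", "vscan", "extensions", "include",
         "set", "exe,dll,doc,docx,xls,xlsx,zip"])
# Then turn scanning on in that vFiler context only
run_cli(["vfiler", "run", "vfiler_cifs1", "vscan", "on"])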
Hello,
We have a small 9.1 filer with SnapLock Enterprise enabled. During configuration we overlooked the requirements for privileged deletes - that they require a SnapLock Compliance volume and aggregate. We are in UAT now and I need to delete all the files written to the volume so far. I have only one spare disk. Can I destroy the SnapLock Enterprise aggregate? (Then I could create it again with fewer disks, giving me enough disks for an extra aggregate.)
You should be able to.
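SnapLock Enterprise (unlike Compliance) does let the administrator destroy volumes and aggregates, so the delete-and-recreate route should work. Roughly the sequence below, written as an ssh-driven sketch against the cluster management LIF; the SVM, volume, aggregate and cluster names are placeholders, and it is destructive, so verify each step against the ONTAP 9.1 documentation and run it by hand first.

# Hypothetical teardown order: SnapLock volumes first, then the aggregate.
# All names are placeholders; run the steps manually before trusting a script.
import subprocess

CLUSTER = "admin@cluster1"     # cluster management LIF (placeholder)

STEPS = [
    "volume offline -vserver svm1 -volume slc_ent_vol1",
    "volume delete -vserver svm1 -volume slc_ent_vol1",
    "storage aggregate offline -aggregate aggr_slc_ent",
    "storage aggregate delete -aggregate aggr_slc_ent",
    # then recreate the aggregate with fewer disks to free spares for the new one
]

for cmd in STEPS:
    print("Running: " + cmd)
    # 'set -confirmations off' avoids the interactive y/n prompt over ssh
    subprocess.call(["ssh", CLUSTER, "set -confirmations off; " + cmd])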
I would like to know this process too, because on a few ONTAP 9 clusters we've got strange behavior with the discovered-servers status.
All of them are marked as Kerberos, MS-LDAP and MS-DC, but only MS-DC seems to be "OK"; the others are all "Undetermined".
I want to change the IP address of the DNS server that was set when ONTAP Select Deploy was installed.
I checked the user guide and the command list on the system itself, but I could not find a command to change it.
# DNS server information is written in the /etc/network/interface
# I'll try to modify, but this file is READ ONLY!
# edit /etc/network/interface
Could you tell me how to solve it?
Hello Team,
I am new to this configuration, and after I finished setting everything up the following error appeared in the GUI session:
Error 500
Servlets not enabled
NetApp Release 7.2.4L1. Please help me out, thanks in advance.
Best Regards,
P Uday Prasad
Hi,
Please see the following KB article:
/Matt
Try this. I am using this in my environment, but I only print ports where the error count is more than 1000. You can change it as you need.
import sys
import json
import xmltodict
from NaServer import *

# Connect to the controller named on the command line (ONTAPI 1.6)
filer_name = sys.argv[1]
filer = NaServer(filer_name, 1, 6)
filer.set_admin_user("admin", "*********")

# First call: list all network ports so we can pick out the physical ones
cmd1 = NaElement("net-port-get-iter")
port = filer.invoke_elem(cmd1)
obj = xmltodict.parse(port.sprintf())
# Round-trip through JSON just to turn the OrderedDicts into plain dicts
jdump = json.dumps(obj)
jload = json.loads(jdump)

# Second call: perf-object-get-instances on nic_common, asking only for the
# rx_total_errors counter on each physical port found above
cmd = NaElement("perf-object-get-instances")
xi = NaElement("counters")
cmd.child_add(xi)
xi.child_add_string("counter", "rx_total_errors")
xi2 = NaElement("instances")
cmd.child_add(xi2)
for h in jload['results']['attributes-list']['net-port-info']:
    if h['port-type'] == 'physical':
        xi2.child_add_string("instance", h['port'])
cmd.child_add_string("objectname", "nic_common")

err = filer.invoke_elem(cmd)
a = xmltodict.parse(err.sprintf())
jd = json.dumps(a)
jl = json.loads(jd)

# Print only the instances whose receive error count is above the threshold
for x in jl['results']['instances']['instance-data']:
    n = int(x['counters']['counter-data']['value'])
    if n > 1000:
        print x['uuid'],
        print x['counters']['counter-data']['value']
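For anyone reusing the script above: it is Python 2 style (note the print statements) and needs the NetApp Manageability SDK (NaServer) plus the xmltodict package. It takes the cluster or controller hostname as its only argument, e.g. python nic_errors.py cluster1-mgmt, where nic_errors.py is just whatever you saved it as, and the admin password is hard-coded where the asterisks are, so change that before using it anywhere real.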
Hello,
I don't have access to the link. Please share the entire info here. It's urgent, as we are running out of storage space. Thanks in advance.
Hi,
Here is the external URL to the KB article.
If your storage controller is not accessible via FilerView then you can access it via SSH remotely or from the console.
I'd advise you raise a support case to resolve your issue.
/Matt
Hey,
I am still unable to resolve this. Please help me, as I am running out of storage space and I need to push this into production.
Team, please, can anyone help me?
Hi,
Thanks for sharing this information.
Cheers