Channel: All ONTAP Discussions posts

Re: Snaplock Compliance aggregate


Thanks for the update - glad you got it sorted out!


0B Snapshots for 30+ Consecutive Days


I have a case opened with NetApp, but their explanation of...

 

This is a cosmetic, presentational blip. Everything is working fine on the backend, and those blocks in the snapshots are OK. To counter any confusion with monitoring/reporting, the volume-level snapshot consumption can be manually verified using "df -fs-type snapshot -h".

 

...does not sit well with me.

 

Of my 50+ volumes across 15 aggregates, only one (1) volume exhibits this behavior. Other volumes on the same aggregate as the troubled volume ALL have snapshot size values > 0B.

 

My daily snapshots since 20170505 all show 0B in size.  Snapshots on that volume from 20170316 to 20170504 show values in the 100s of GB.  So clearly something happened between May 4th and 5th.

 

Has anyone else seen this behavior on such a limited scale - only 1 volume?  I get the whole "block reclamation" explanation, but I find it hard to believe that after 36 days THIS ONE volume is having trouble completing the process when all other volumes (volumes that are busier, more heavily utilized, etc.) show no 0B snapshots.

 

Thank you for any feedback you may be able to lend here.

 

Snapshot list:

(Abbreviated for your protection)

                                                             ---Blocks---
Vserver  Volume   Snapshot                           Size    Total% Used%
-------- -------- ---------------------------------- ------- ------ -----
SVM_01   vol_02
                  smvi_Daily_novmsnap_20170316030002 187.2GB      1% 4%
                  smvi_Daily_novmsnap_20170317030002 83.80GB      0% 2%
                  smvi_Daily_novmsnap_20170318030001 79.50GB      0% 2%
                  smvi_Daily_novmsnap_20170319030002 94.93GB      0% 2%
                  smvi_Daily_novmsnap_20170320030002 76.66GB      0% 2%
                  smvi_Daily_novmsnap_20170321030001 101.6GB      0% 2%
                  smvi_Daily_novmsnap_20170322030002 111.7GB      1% 2%

                                          ...

                                          ...
                  smvi_Daily_novmsnap_20170429030002 87.34GB      0% 2%
                  smvi_Daily_novmsnap_20170430030003 79.46GB      0% 2%
                  smvi_Daily_novmsnap_20170501030002 88.73GB      0% 2%
                  smvi_Daily_novmsnap_20170502030002 159.5GB      1% 4%
                  smvi_Daily_novmsnap_20170503030002 77.27GB      0% 2%
                  smvi_Daily_novmsnap_20170504030002 80.65GB      0% 2%
                  smvi_Daily_novmsnap_20170505030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170506030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170507030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170508030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170509030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170510030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170511030001      0B      0% 0%
                  smvi_Daily_novmsnap_20170512030003      0B      0% 0%
                  smvi_Daily_novmsnap_20170513030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170514030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170515030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170516030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170517030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170518030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170519030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170520030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170521030003      0B      0% 0%
                  smvi_Daily_novmsnap_20170522030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170523030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170524030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170525030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170526030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170527030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170528030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170529030001      0B      0% 0%
                  smvi_Daily_novmsnap_20170530030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170531030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170601030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170602030001      0B      0% 0%
                  smvi_Daily_novmsnap_20170603030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170604030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170605030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170606030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170607030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170608030001      0B      0% 0%

85 entries were displayed.
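For reference, the same consumption can also be read at the volume level from the clustershell; a sketch assuming clustered ONTAP, using the anonymized SVM/volume names from the listing above (substitute your actual node name):

```
::> volume show -vserver SVM_01 -volume vol_02 -fields size-used-by-snapshots
::> node run -node <node_name> df -h vol_02
```

The second command drops to the nodeshell, where the .snapshot row of df gives an independent view of the same counter.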

 

SLAG (Storage Level Access Guard) & ONTAP 9


Just a quick question, I hope. I am migrating an older ONTAP 8.2 (7-Mode) filer. I believe it had SLAG enabled on a UNIX volume, primarily to enable CIFS auditing.

This SLAG setup may have been present when the filer was initially at 7.x levels, but that is a guess and before my time.

 

I suspect that at ONTAP 9.x, SLAG is not required for auditing. When I searched the CIFS and NFS Auditing guide, I found no references to SLAG.

 

So am I correct in believing that SLAG is no longer required for UNIX (NFS) auditing because NFSv4 ACLs are available now?

 

I would ideally like to simplify the auditing and remove the SLAG if it's not needed for enabling auditing anymore.

 

Rgds AndyP

Re: 0B Snapshots for 30+ Consecutive Days


Do you mind posting the output of the following command?

 

::> vol show -vserver SVM_01 -volume vol_02 -instance

Re: 0B Snapshots for 30+ Consecutive Days


Actual SVM, node, volume, and other identifying info have been changed to protect the innocent... as they say.

 

Vserver Name: SVM_10
Volume Name: vol_02
Aggregate Name: aggr01_node04
Volume Size: 20.60TB
Volume Data Set ID: 1063
Volume Master Data Set ID: 2147484711
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: 0
Group ID: 0
Security Style: unix
UNIX Permissions: ---rwxr-xr-x
Junction Path: -
Junction Path Source: -
Junction Active: -
Junction Parent Volume: -
Comment:
Available Size: 2.30TB
Filesystem Size: 20.60TB
Total User-Visible Size: 20.60TB
Used Size: 18.30TB
Used Percentage: 88%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 21TB
(DEPRECATED)-Autosize Increment (for flexvols only): 512GB
Minimum Autosize: 15TB
Autosize Grow Threshold Percentage: 90%
Autosize Shrink Threshold Percentage: 50%
Autosize Mode: grow
Autosize Enabled (for flexvols only): true
Total Files (for user-visible data): 31876689
Files Used (for user-visible data): 101
Space Guarantee Style: volume
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshot Copies: 0%
Snapshot Reserve Used: 0%
Snapshot Policy: none
Creation Time: Thu Aug 21 12:59:17 2014
Language: C.UTF-8
Clone Volume: false
Node name: NODE_04
NVFAIL Option: on
Volume's NVFAIL State: false
Force NVFAIL on MetroCluster Switchover: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 0%
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Inconsistency in the File System: false
Is Volume Quiesced (On-Disk): false
Is Volume Quiesced (In-Memory): false
Volume Contains Shared or Compressed Data: true
Space Saved by Storage Efficiency: 2.41TB
Percentage Saved by Storage Efficiency: 12%
Space Saved by Deduplication: 2.41TB
Percentage Saved by Deduplication: 12%
Space Shared by Deduplication: 424.7GB
Space Saved by Compression: 0B
Percentage Space Saved by Compression: 0%
Volume Size Used by Snapshot Copies: 10.55TB
Block Type: 64-bit
Is Volume Moving: false
Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
Constituent Volume Role: -
QoS Policy Group Name: _Performance_Monitor_volumes
Caching Policy Name: -
Is Volume Move in Cutover Phase: false
Number of Snapshot Copies in the Volume: 85
VBN_BAD may be present in the active filesystem: false
Is Volume on a hybrid aggregate: false
Total Physical Used Size: 14.81TB
Physical Used Percentage: 72%

Re: SLAG (Storage Level Access Guard) & ONTAP 9


Thanks for the response; it confirms what I thought I was reading in the guides. I have been unpicking an old filer and trying to make sense of the configuration.

 

Just for clarity, can you confirm: in the new environment (ONTAP 9.1P1), the volume is /vol/volname with security style UNIX, and it is exposed to clients via both NFS exports and CIFS shares.

 

So if I add a suitable NFSv4 audit ACL to the volume, I should get audit events when the data is accessed via either the NFS or CIFS protocol. Is that correct?

 

Rgds AndyP

 

 

System Manager got worse in ONTAP 9.2?


Hi

I installed ONTAP 9.2RC on our lab system and I feel like System Manager got worse than in ONTAP 9.1.

A simple test: can you please find where to view information about snapshots in SM 9.2? I clicked through menus for 2-3 minutes before I found it. What does marketing say about a simple interface?

Many things changed in the 9.2 interface, and most of them are not straightforward to find or understand, IMHO. I feel the tab-like interface was better.

What do you think?

Nick


7MTT Support / Assistance


I logged a support call via the NetApp partner program (IBM); I believe they then pass on to NetApp the cases they do not solve. I was told 7MTT was not supported.

 

"I'm sorry to inform you but apparently we do not support the migration process nor the 7MTT tool."

 

So my question is: I need some guidance on this 7MTT warning. I have already migrated 14 vFilers and filers and have not seen this warning before.

 

PreCheck:

warning (21062)   Failed to collect the CIFS Share ACLs configuration from the following 7-Mode storage systems.

 

Any advice welcome.

 

Rgds AndyP

 

 

ONTAP Select: Cannot create


 

Hello,

 

With ONTAP Select Deploy 2.3 and 2.4, I get an error when creating a cluster. The node cannot be created, with the following error.

I tried with both the Web GUI and the CLI.

 

How can I solve this issue?

 

 

ONTAP Select Deploy event:

 

ClusterNodeCreateFailed: One of the node create operations failed during cluster "CLU_TEST" create.

NodeCreateFailed: Node "CLU_TEST-node1" create failed: Cannot create VM 'CLU_TEST-node1' (errType=InvalidRequest).

 

 

sdotadmin_server.log :

 

2017-06-09 09:16:40,046|DEBUG response body: {"code":56,"details":"Cannot create VM 'CLU_TEST-node1' (errType=InvalidRequest)","type":"VmCreateOvfErr"}
2017-06-09 09:16:40,060|ERROR |client_api_helper.py|183:vm_create| Error: NodeCreateFailed: Node "CLU_TEST-node1" create failed: Cannot create VM 'CLU_TEST-node1' (errType=InvalidRequest).
2017-06-09 09:16:46,431|DEBUG response body: {"code":13,"details":"<vmname> invalid - no vm named 'CLU_TEST-node1' on host 'nte-esx01.mqt02.mqt'","type":"InvalidArg"}
2017-06-09 09:16:46,844|ERROR |cluster_tasks.py|519:_create_nodes| Cluster [CLU_TEST]: one or more of node create tasks failed, initiating rollback
2017-06-09 09:16:46,845|ERROR |cluster_tasks.py|520:_create_nodes| Error: ClusterNodeCreateFailed: One of the node create operations failed during cluster "CLU_TEST" create.
2017-06-09 09:16:46,845|ERROR |cluster_tasks.py|277:create_cluster| Cluster [CLU_TEST]: cluster create failed
2017-06-09 09:16:46,849| INFO |cluster_tasks.py|292:create_cluster| initiating rollback of failed cluster (CLU_TEST)
2017-06-09 09:16:46,856|DEBUG |cluster_tasks.py|331:delete_cluster| Delete cluster "CLU_TEST"
2017-06-09 09:16:46,885| INFO |cluster_tasks.py|421:delete_cluster| Cluster [CLU_TEST]: all nodes in cluster are deleted, deleting cluster db
2017-06-09 09:16:46,886| INFO |cluster_tasks.py|427:delete_cluster| cluster (CLU_TEST) deleted

 

 

 

Regards,

 

Cyrille

Re: fpolicy logs in ONTAP


Currently there is no option to see complete details about FPolicy, but you can see the number of attempts a user made to save or create files with the screened extension.

Use the "fpolicy" command; it will show you the details.

Re: 0B Snapshots for 30+ Consecutive Days


So it appears the WAFL block reclamation scan has not run since 5/5/2017... that makes perfect sense. So the question now is WHY? But that's for support to dig into.


For now I can manually kick off  

 

node run -node <node_name> wafl scan ownblocks_calc <volume_name>

 

to keep the snapshot size numbers accurate (and management off my back when they scream "We have NO backups??!").
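A sketch of the full workaround cycle (the node name is a placeholder; the SVM/volume names are the anonymized ones from the listing above):

```
::> node run -node <node_name> wafl scan ownblocks_calc vol_02
::> volume snapshot show -vserver SVM_01 -volume vol_02
```

The second command re-checks the per-snapshot Size column after the scan completes.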

ONTAP Recipes: Easily update your existing version of ONTAP


Did you know you can...

 

Quickly update your version of ONTAP using OnCommand System Manager?

 

  1. Log in to OnCommand System Manager.
  2. Click Configurations > Cluster Update.
  3. Follow the steps in the Cluster Update wizard to perform the upgrade.

 [Image: UG1.png]

 

For more information, please see the ONTAP 9 documentation center.
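The same update can also be driven from the clustershell with the automated update commands; a sketch, where the HTTP URL and target version are placeholders for your own package location and release:

```
::> cluster image package get -url http://web_server/ontap_image.tgz
::> cluster image validate -version 9.2
::> cluster image update -version 9.2
::> cluster image show-update-progress
```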

ONTAP Recipes: Secure data at rest using software encryption and the onboard key manager


Did you know you can...

 

Secure data at rest using software encryption and the onboard key manager?

 

If you have existing infrastructure and want to encrypt your data for compliance or standard security best practices, use the NVE software-based data-at-rest encryption feature for any SSD or disk type:

 

1. Install the NetApp volume encryption (NVE) license for each node.

 

  system license add -license-code license_key

 

2. Start the key manager setup wizard.

 

  security key-manager setup

 

3. Create a new volume and enable encryption on the volume.

 

   volume create -vserver SVM_name -volume volume_name -aggregate aggregate_name -encrypt true

 

Example: cluster1::> volume create -vserver vs1 -volume vol1 -aggregate aggr1 -encrypt true

 

4. Verify that the volume is enabled for encryption.

 

  volume show -is-encrypted true

 

 

For more information, please see the ONTAP 9 documentation center.

ONTAP Recipes: Prioritize business critical applications using QoS Min


Did you know you can…

 

Prioritize business critical applications using QoS Min?

 

QoS allows you to isolate business-critical applications to ensure they meet required performance. To set up QoS Min in your system:

 

  1. Log in to OnCommand System Manager.
  2. Select the business-critical volume, then click Actions > Storage QoS.
  3. Set the Minimum Throughput to 40,000 IOPS.

 [Image: QoS1.png]

 

4. From the performance dashboard, observe that the service-level guarantee of 40,000 IOPS is met and latency is reduced.

     Note that test and dev workloads may be impacted, meaning fewer IOPS and higher latencies.

 

 [Image: QoS2.png]
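For CLI-driven setups, the equivalent QoS Min configuration can be sketched as follows (assumes ONTAP 9.2 or later on a platform that supports QoS Min; the policy-group, SVM, and volume names are placeholders):

```
::> qos policy-group create -policy-group pg_critical -vserver vs1 -min-throughput 40000IOPS
::> volume modify -vserver vs1 -volume business_vol -qos-policy-group pg_critical
::> qos statistics performance show
```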

 


 

 

For more information, please see the ONTAP 9 documentation center.


Re: 0B Snapshots for 30+ Consecutive Days


"...and management off my back when they scream "We have NO backups??!" "

 

:D

 

Glad you found some solution to this.

 

And thank you for posting the workaround. Appreciate it.

Is data from a destroyed volume destroyed?


Hello,

 

I am wondering what happens in a multiple-SVM scenario when SVM A destroys a volume X. If SVM B then creates a new volume Y, and blocks that previously belonged to volume X now belong to Y, is the data from volume X available through volume Y?

 

Basically, how can I reassure a very cautious tenant about the security of their data? Is there official documentation on this?

 

Thank you, 


