Channel: All ONTAP Discussions posts
Viewing all 19214 articles
Browse latest View live

Re: To recreate one and only one aggr on same disks?


I wasn't 100% sure. Thanks for clarifying.

 

In that case yeah,  you could do the following. 

 

Add a temp shelf.
Create an aggr_temp using just that shelf.
Copy your data to that shelf.
Delete aggr1.
Reconfigure the partitions how you want.
Create the new aggr(s).
Copy the data back to the new aggr(s).
Remove aggr_temp.
Unown the temp disks.
Hot-remove the temp shelf.
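
A rough sketch of that flow in clustered ONTAP CLI (aggregate, volume, SVM, and node names are placeholders; disk counts and RAID options depend on your shelf):

storage aggregate create -aggregate aggr_temp -node NODE1 -diskcount 22
volume move start -vserver svm1 -volume vol_a -destination-aggregate aggr_temp
(repeat the volume move for each volume on aggr1; once aggr1 is empty:)
storage aggregate offline -aggregate aggr1
storage aggregate delete -aggregate aggr1
(repartition, create the new aggregate(s), and move the volumes back, then:)
storage aggregate offline -aggregate aggr_temp
storage aggregate delete -aggregate aggr_temp
storage disk removeowner -disk 1.10.0
(repeat the removeowner for each temp-shelf disk, then hot-remove the shelf)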

 


Re: To recreate one and only one aggr on same disks?


>Reconfigure partitions how you want. 

>Create new Aggrs 

 

This is the part I have questions about. Remember, I have only destroyed the partitions that made up the data aggr; the partitions sharing the same disks were not reinitialized and are still active in the root aggr. How do I restore the partitions that were in the data aggr before it was destroyed?

Re: To recreate one and only one aggr on same disks?


After the data aggr is deleted, the "disks" (partitions) will just be returned to the spare pool. The root aggr will remain intact.

 

Zero them out and they'll be ready to be added to the new aggr(s). If you need to reassign the disks/partitions between the two controllers, use the "-data" flag with the disk subcommands.

e.g: 

disk removeowner -disk x.x.x -data true
disk assign -disk x.x.x -data true -owner NODEx   

 

This command shows what is assigned where:

disk show -fields data-owner,root-owner
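
Once the old data partitions are back in the spare pool, a minimal sketch of prepping them (ONTAP 9.x syntax) would be:

storage disk zerospares
storage aggregate show-spare-disks

and the new aggregate(s) can then be created against the zeroed spares.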

 

How do you plan to create the new aggr, and why does it need to be deleted and recreated? I can only think of a few reasons to do that.

Re: Advanced disk partition for FAS8200


 Thank you for the quick help.

 

 

event pvif.alllinksdowntrap does not support SNMP trap


Dear Experts,

I have a case about SNMP errors.

The ONTAP version is 9.5P1.

Running the following command

event destination modify -name traphost -snmp
gives the following error:
Error: command failed: The destination traphost cannot include an SNMP field because it is assigned to event pvif.alllinksdowntrap and this event does not support SNMP trap.  
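
For reference, listing which event routes still reference the destination (deprecated event route command set; syntax assumed for 9.5 and worth verifying with the built-in help) would look like:

event route show -destinations traphost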

Document BR15038 has already been tried.

 

This KB was already tried, with no success: Error when removing SNMP trap hosts on clustered Data ONTAP
https://kb.netapp.com/app/answers/answer_view/a_id/1071259/loc/en_US#__highlight

 

I would kindly ask for some suggestions.

Thank you in advance.

Re: Volume is full or Volume crossed autodelete threshold.


This looks like it's through SnapDrive; by the sounds of it, it wants to restore the LUN. That seems to need enough free space in the active file system so that it can copy the LUN out of that snapshot. Check the LUN size and check the free space.
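
A quick way to check both, assuming clustered ONTAP (vserver, volume, and LUN path below are placeholders):

volume show -vserver svm1 -volume vol_db -fields size,available,percent-used
lun show -vserver svm1 -path /vol/vol_db/lun1 -fields size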

ontap upgrade two major version on the same time


My current ONTAP version is 9.1P5.

I want to upgrade it to ONTAP 9.5P2.

According to Upgrade Advisor, I need to upgrade to 9.3 first.

Therefore, on the same night, I will upgrade ONTAP from 9.1P5 to 9.3P12 and then to 9.5P2.

As I understand it, after I upgrade to 9.3P12, several firmware packages will be updated automatically if the versions in the cluster are older than the firmware bundled with the ONTAP upgrade package.

They are BIOS, Flash Cache, SP, disk, and disk shelf firmware.

My question here is:

Do I need to wait for all of those firmware updates to complete before upgrading from 9.3P12 to 9.5P2, or can I continue to 9.5P2 once I have verified the version (9.3P12), HA status, aggregates, and that the LIFs are on their home ports?

9.3P12 was released on 5/27/2019.

9.5P2 was released on 3/24/2019, which is earlier than 9.3P12.

Does that matter for my upgrade?

 

 

Re: ontap upgrade two major version on the same time


I would pick the latest P releases.   But I would not jump from 9.1 to 9.3 to 9.5 in one day.    When I run through this with customers I typically will wait a week or so between the major updates.     

 

But to answer your question, yes you will need to wait for all the background firmware updates to finish before moving on.     The disk and shelf firmware could be the same between the two versions,   but SPs and other firmware will be different between 9.3 and 9.5.    
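
A few commands that can help confirm where the background updates stand before starting the next hop (a sketch; field names may vary slightly by release):

cluster image show
system service-processor show -fields firmware-version
storage disk show -fields firmware-revision
storage failover show
network interface show -is-home false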

 

I would also run through the IMT to verify that everything in your environment is compatible with the new versions of ONTAP.


Re: cluster image package get -> Error: Peer certificate cannot be authenticated with given CA ce


Would you mind sharing how you fixed this issue? I am having the same problem and I don't seem to be able to find other resources out there for this exact issue.

Re: Deleting stale cluster peers, ONTAP 9.1 & 9.3


Darn the luck, it didn't work. I was able to recreate the SVM peer relationship, recreate DP volumes, re-establish and initialize the relationship, and then delete-and-release properly, but there's still an artifact of that relationship on the source cluster. I might have to reach out to support after all.

Re: ontap upgrade two major version on the same time


According to NetApp support, I can continue to upgrade from 9.1 to 9.3 and then 9.3 to 9.5.

I did it on a two-node cluster.

After I upgraded from 9.1 to 9.3, I verified HA and its related objects, uploaded an AutoSupport to Active IQ, and ran Config Advisor.

I saw no issues from either, and after 30 minutes I continued the upgrade to 9.5.

It has been running fine without any issue for 5 days now.

Preparation, upgrade, and verification took me at least about 3.5 hours for the two nodes.

Therefore, if you have more than two nodes in a cluster, I would prefer not to do it on the same day.

However, I see no issue with upgrading across two major versions at the same time.

 

New disk firmware and recovery messages


Recently I installed the new NA02 firmware on my X423_HCOBE900A10 disks to fix bugs 888889 and 1239356. Now I see informational "Recovered error" messages from some disks, as described in the KB articles.

 

Tue Jun 25 08:33:13 +04 [R53-na01-A:disk.ioRecoveredError.retry:info]: Recovered error on disk 3a.00.12: op 0x28:68cb85b0:0008 sector 255 SCSI:recovered error - Disk used internal retry algorithm to obtain data (1 b 95 95) (12) [NETAPP   X423_HCOBE900A10 NA02] S/N [KXH1SAUF]
Tue Jun 25 08:33:13 +04 [R53-na01-A:disk.ioFailed:error]: I/O operation failed despite several retries.

I know this is normal and the disks heal themselves with the new firmware, but I have some questions.

1. Will these messages stop appearing after some time?

2. Is it possible that the weekend aggr scrub task will break my array if some error thresholds are reached?
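
For reference, a couple of related checks, with the message name taken from the log above and a placeholder aggregate name (assuming ONTAP 9.x syntax):

event log show -message-name disk.ioRecoveredError.retry
storage aggregate scrub -aggregate aggr1 -action status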

Copy CIFS snapshots to a non-netapp system


Hi all,

 

We have an old FAS3140 that we need to decommission; it's out of support. It contains about 10 TB of SnapVault snapshots from a CIFS volume, and they will expire in a year. We want to move these snapshots to other storage, but we do not have access to NetApp storage, only other NAS with SMB/NFS. Is that possible somehow?

 

Since it's possible to browse the snapshots via CIFS, I could just copy all the snapshot folders to a NAS, but that would surely break the snapshots, since they would no longer be stored on WAFL. Right?

 

Please advise.

 

Thank you!

Re: Copy CIFS snapshots to a non-netapp system


Hi

 

If you have a very good dedupe engine on the new platform, you can copy the \vol\vol_name\.snapshot folder over CIFS (you need to make the folder visible first). However, if you don't have dedupe, each file will be copied once per snapshot it appears in and will consume space accordingly.
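
For reference, making the .snapshot directory reachable over CIFS looks roughly like this on clustered ONTAP (volume and share names are placeholders); on 7-Mode the equivalents are the cifs.show_snapshot option and vol options nosnapdir off:

volume modify -vserver svm1 -volume vol_cifs -snapdir-access true
vserver cifs share properties add -vserver svm1 -share-name share1 -share-properties showsnapshot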

 

A more NetApp-style solution I can offer is to purchase a one-year license for an ONTAP Select VMware appliance, provision it in your new environment, and SnapMirror the volumes to it.

 

Gidi

Re: Copy CIFS snapshots to a non-netapp system


Hi and thank you so much for your answer.

 

Yeah, we are using Windows Server 2016 with ReFS, which does not support dedupe. The boss said to just unplug it and store it. Restore times will be longer... :D

 

Fine by me. :)


Re: ontap upgrade two major version on the same time


If you have firewalls between some of your LIFs, be sure to verify your routes. ONTAP 9.2 had major changes, including the removal of fast path routing, and this can really break things. It really burned me on my upgrade to 9.3.
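
A quick post-upgrade sanity check could look like this (SVM name is a placeholder):

network route show -vserver svm1
network interface show -vserver svm1 -fields address,netmask,curr-node,curr-port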

SLOW Cifs performance after snapmirror break and subsequent resync


We have been seeing an issue which affects our systems only when we break our SnapMirrors for DR purposes and then fail back afterward. After the DR operation concludes and we have resynced the original relationship, CIFS performance is much slower. We particularly see this when accessing Office files. We suspected this for a while and confirmed it by creating a "clean" test volume, then running the very same data-analysis job against the test volume and an existing production volume. The test volume was just as fast as we expected; the production volume was about 6x slower to run the job. We can definitely track the beginning of the issue to the failover/failback. Does anyone have any ideas? FAS2552, ONTAP 9.5.

Re: SLOW Cifs performance after snapmirror break and subsequent resync


Hi

 

To answer your concern directly: the only operation I'm aware of that runs after a SnapMirror break is a deswizzling scan, and it can be quite impactful if deswizzling was not able to run after each SnapMirror update: https://kb.netapp.com/app/answers/answer_view/a_id/1003882

 

Having said that, for this to impact just day-to-day Office access sounds a bit odd. I can give some ideas to try and follow up on:

 

Are the two systems the same spec? Do they have the same network design?

At what level of object do you see the latency? (Volume/LIF usually shows client-related latency; aggregate latency is usually down to the local system.) A sketch of such a check follows this list.

Do the clients have the same connectivity? (A WAN connection with latency can be very impactful for day-to-day use like Office files.)

Are the clients negotiating the same protocols? (SMB version and Kerberos vs. NTLM; a missing SPN can cause issues.)
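
For the volume-level check, a sketch of a per-volume latency query (volume and SVM names are placeholders) could be:

qos statistics volume latency show -vserver svm1 -volume vol_cifs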

 

I think that, after isolating the common causes, the right thing to have is a few Wireshark captures from each scenario.

 

Gidi

 

vol move problem


Hi,

I have a 6-node cluster consisting of FAS8020, FAS8040, and AFF300 nodes, running cDOT 9.5P4 with a total of 3 SVMs;

the latest, SVM3, has 2 aggregates built from the AFF300 disks.

 

Moving any volume from SVM2 to the aggregates of the new SVM3 is possible,

but when I try to move any volume from SVM1, the AFF300 aggregates are not shown as possible destinations.
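
One thing that reportedly can hide aggregates as vol move destinations for a particular SVM is the SVM's aggregate list; checking and extending it would look something like this (SVM and aggregate names are placeholders):

vserver show -vserver SVM1 -fields aggr-list
vserver add-aggregates -vserver SVM1 -aggregates aff_aggr1,aff_aggr2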

 

Am I missing something?

Thanks in advance for any tips,

gavan

 

Changing from active/active to active/passive in 7-Mode


Hello,

 

I have a NetApp FAS2240-4 with Release 8.2.4P4 7-Mode. It has two controller boards, one disk shelf, and a total of 48 disks (24x2TB in the FAS and 24x1TB in the shelf). It is out of support, so I think I cannot upgrade ONTAP. I used to have the disks configured like this:

 

2.gif

Now I would like to use a configuration like this:

1.gif

I booted into maintenance mode and changed the owner of all disks to Node 1. But now Node 2 does not boot, because it does not have any disks. The status of the aggregates is like this:

aggr status -v
Aggr State Status Options
aggr0 online raid_dp, aggr root, diskroot, nosnap=off, raidtype=raid_dp,
64-bit raidsize=11, ignore_inconsistent=off,
snapmirrored=off, resyncsnaptime=60,
fs_size_fixed=off, lost_write_protect=on,
ha_policy=cfo, hybrid_enabled=off,
percent_snapshot_space=0%,
free_space_realloc=off

Volumes: vol0

Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal, block checksums
RAID group /aggr0/plex0/rg1: normal, block checksums

aggr1 offline raid_dp, aggr diskroot, raidtype=raid_dp, raidsize=11,
foreign resyncsnaptime=60, lost_write_protect=off,
64-bit ha_policy=cfo, hybrid_enabled=off,
percent_snapshot_space=0%
Volumes: <none>

Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal, block checksums
RAID group /aggr1/plex0/rg1: normal, block checksums

And the volume:

vol status -v
Volume State Status Options
vol0 online raid_dp, flex root, diskroot, nosnap=off, nosnapdir=off,
64-bit minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off,
create_ucode=on, convert_ucode=on,
maxdirsize=45875, schedsnapname=ordinal,
fs_size_fixed=off, guarantee=volume,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off,
try_first=volume_grow, read_realloc=off,
snapshot_clone_dependency=off,
dlog_hole_reserve=off, nbu_archival_snap=off
Volume UUID: 370ddab1-6719-11e3-8d9a-123478563412
Containing aggregate: 'aggr0'

Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal, block checksums
RAID group /aggr0/plex0/rg1: normal, block checksums

Snapshot autodelete settings for vol0:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
mode=off
Hybrid Cache:
Eligibility=read-write

 

What steps am I missing to set up the configuration in the picture above? If possible, I do not want to start from scratch.
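
For reference, the 7-Mode commands that seem relevant here, as a sketch rather than a verified procedure (disk names are examples, and Node 2 will still need its own root disks/aggregate to boot):

disk show -v
disk assign 0b.01.5 -o Node1
aggr status -r
aggr online aggr1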

 

Thank you,

 

Andreas

Viewing all 19214 articles
Browse latest View live


<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>