That looks healthy.
You can also try "set d; debug vreport show"; it will show any inconsistencies in the system/WAFL.
Have you opened a support case? It could be something just wonky with the (ONTAP Select?) deploy.
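For clarity, the full sequence from the clustershell would be roughly this (diagnostic privilege is required, so use it with care):

::> set -privilege diagnostic
::*> debug vreport show
::*> set -privilege admin

If vreport comes back clean, that's one more data point for the support case.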
I would still present new datastores from the 27xx and move the VMs cold; it's less disruptive and gives an easier revert plan.
A colleague of mine is also suggesting this could work for you too:
https://helpcenter.veeam.com/archive/backup/95/free/migration_job.html
Yes, I am still looking for the answer. Thanks!
Hi guys, my customer is creating an Enterprise SnapLock aggregate but wants to change it to Compliance. Is it possible?
How can I make this change?
This is NOT possible.
Delete the aggregate, zero the disks, and start over.
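For reference, a rough sketch of that sequence from the clustershell (aggregate names and disk count are placeholders; the aggregate must be empty before it can be deleted, and SnapLock Compliance also needs the ComplianceClock initialized on the node, which is a one-time, irreversible step):

::> storage aggregate delete -aggregate aggr_slc_ent
::> storage disk zerospares
::> snaplock compliance-clock initialize -node node01
::> storage aggregate create -aggregate aggr_slc_comp -diskcount 10 -snaplock-type compliance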
Thanks for your help,
switchless is set to false.
I agree and understand exactly what you have explained.
There is no data, so I think I have to restart fresh. So now:
I am new to FAS; is there a way to reboot, and how do I break into the boot menu?
Controller 1: reboot, use option 4
Controller 2: reboot, use option 4, cluster setup join
Controller 3: cluster setup join (do I have to do option 4 on all the rest of the controllers?)
Controller 4: cluster setup join
Controller 5: cluster setup join
Controller 6: cluster setup join
reboot will reboot it.
halt will bring it down and park it at the LOADER> prompt.
You will see two "Press Ctrl-C" prompts during boot. You want to press Ctrl-C at the second one for the boot menu.
If you haven't joined the nodes, I don't think you need to run opt 4.
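Roughly, for whichever nodes you do need to wipe, the flow per node looks like this (option 4 destroys all data on that node's disks):

::> system node halt -node local
LOADER> boot_ontap
(press Ctrl-C at the second prompt to get the boot menu, then choose option 4, "Clean configuration and initialize all disks")
(once initialization finishes, the node boots into the cluster setup wizard: "create" on the first node, "join" on the rest)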
It's really odd that they can't see each other. Do you know what version of the RCF the CN1610s are running?
https://mysupport.netapp.com/NOW/download/software/sanswitch/fcp/NetApp/cn1610cm/download.shtml
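If it helps, on the CN1610 (FASTPATH) CLI you can check the firmware with "show version", and the RCF version is usually noted in a comment near the top of the running config; exact output varies by release, so treat this as a rough pointer:

(CN1610) >enable
(CN1610) #show version
(CN1610) #show running-config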
Not sure of the RCF version on the CN1610s.
I will start the process on controllers 1 and 2 with option 4;
on the rest of the controllers I will not.
Thanks for the help
As I understand it, FabricPool tiering would not transfer deduplicated data over to AWS S3; it would have to be rehydrated first. My questions are:
1. Where would the rehydration process take place, on the storage cluster?
2. Then, when retrieving data back to the storage cluster from S3, will the data be deduplicated again?
3. How much could performance be degraded?
Thanks!
Performed:
Controller 1: reboot, used Option 4, create cluster successful
Controller 2: reboot, used Option 4, join cluster successful
Controller 3: reboot, used Option 4, join cluster failed; used controller clus1 e0a
Controller 4: reboot, used Option 4, join cluster failed; used controller clus1 e0a
Controller 5: reboot, used Option 4, join cluster failed; used controller clus1 e0a
Controller 6: reboot, used Option 4, join cluster failed; used controller clus1 e0a
Question: We have Switch A and Switch B. From all controllers, e0a goes to Switch A and e0b goes to Switch B.
I see in the error logs that controller 3's e0a IP failed to ping controller 1's e0b. I think it failed because the two switches are not interconnected. Do we need to interconnect the switches?
From controllers 3, 4, 5, and 6, pings are successful to controller clus1 (e0a) and clus2 (e0b).
Totally lost here.
Yes, there is a pre-configured ISL between the two CN1610s using the last 4 ports on the switch. You should have 4 short twinax cables to connect them. (Ports 13,14,15,16)
Cabling example:
https://library.netapp.com/ecmdocs/ECMP1197116/html/GUID-3F4B53E3-C5D4-40D8-ABF3-4FD8AFD6D10D.html
and ISL:
https://library.netapp.com/ecmdocs/ECMP1197116/html/GUID-29E2812B-1357-4FB8-A117-EDB72DB123CF.html
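Once the ISLs are cabled, a quick way to verify the cluster network end to end is the built-in ping test (advanced privilege; node name is a placeholder):

::> set -privilege advanced
::*> cluster ping-cluster -node node01
::*> set -privilege admin

It pings between all of the cluster LIFs, so the cross-switch paths (e0a on Switch A to e0b on Switch B) get exercised as well.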
Give this a read over: https://www.netapp.com/us/media/tr-4598.pdf
" Storage efficiencies such as compression, deduplication, and compaction are preserved when moving data to the capacity tier, reducing object storage and transport costs. Aggregate inline deduplication is supported on the performance tier, but associated storage efficiencies are not carried over to objects stored on the cloud tier."
So yes, there is a lot preserved, but when the data is moved off the "local" aggr, you lose the aggr level dedupe.
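Not what you asked, but for context, attaching an S3 bucket as a FabricPool cloud tier looks roughly like this (object store, bucket, aggregate, SVM, and volume names are placeholders):

::> storage aggregate object-store config create -object-store-name aws_store1 -provider-type AWS_S3 -server s3.amazonaws.com -container-name my-fabricpool-bucket -access-key <access-key> -secret-password <secret-key>
::> storage aggregate object-store attach -aggregate aggr1 -object-store-name aws_store1
::> volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only

The tiering policy controls which blocks are eligible to move to the cloud tier.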
Hi, it's a FAS8060 with 9.1P15, and no, I've checked on Hardware Universe: we cannot go beyond 400TB (360TB) even with 9.5.
Thank you JGPSHNTAP!
I think we got it. Will make those connections now and will update.
Thanks for the help
This is the part I don't get.
So, are you saying that I will lose the aggregate-level inline dedupe, but compression and compaction will be preserved? If yes, then again, as I asked earlier, the data will get rehydrated first and then transferred to object storage, correct?
On the other hand:
> Storage efficiencies such as compression, deduplication, and compaction are preserved when moving data to the capacity tier,
This sentence reads to me as if all efficiencies (including deduplication? If yes, then it contradicts the above!) are preserved and carried over to the object storage. So, I am confused.
Just the aggregate-level dedupe will be lost. And, as with any other efficiency feature, there is a minute performance impact.
All good here. As soon as the ISL connections were made, it took me 10 minutes to finish up the cluster setup.
Appreciate your help.
Thanks
I have an ONTAP Select system that is up and running. The ESX system is being moved to another location, and thus the naming of the virtual machines is being changed. Is it possible to rename the Deploy and Select systems? Since Select runs ONTAP, I believe I can rename the cluster, but will this break the Deploy system?
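For what it's worth, the rename on the ONTAP side itself looks like it would just be this (new name is a placeholder); what I can't tell is whether Deploy tolerates it:

::> cluster identity modify -name new_cluster_name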
I've looked for documentation and haven't been successful in finding whether or not this is possible, and if so, how to move forward. If anyone has any information or knows of documentation on how to do this, I would appreciate the help.
Thanks,
Travis
No problem!