Hello,
you can try this workaround:
disk assign <diskid>
priv set diag
labelmaint isolate <diskid>
label wipe <diskid>
label wipev1 <diskid>
label makespare <diskid>
labelmaint unisolate
priv set
Be careful about the disk number.
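To double-check which disk you are about to touch, something like this (7-Mode commands; run them first and confirm the disk ID matches) shows ownership and whether the disk is still in an aggregate or is a spare:
disk show -v
sysconfig -r
Only run the label commands against a disk that shows up exactly where you expect it.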
See in this post:
NetApp 9.3 command collection. It may help.
Do you have a link to download the 9.3 version? Much appreciated.
VMware is easy (usually). Just set up new datastores and migrate via Storage vMotion.
Is it possible to do it without VMotion?
I prefer to use the 7MTT for migrations when I can.
I have not used it yet but looks interesting.
Thanks for your comments.
I would think you could use the 7MTT to move the datastores as a whole, but downtime will be required for the VMs on the datastore(s) being moved.
Why not use sVmotion?
Hello everyone,
I have some problems deploying a 2-node cluster; it is stuck at the "POST DEPLOY SETUP" task.
All tasks were marked as successful so far, and I can also connect to the cluster and node shell(s).
Commands like "cluster show", "net int show", "stor disk show" only tell me that everything is up and running. No errors at all.
According to NetApp, if the cluster deployment is not completely done, the default login will be "admin / changeme123".
As this exact login is working in my situation, I guess that the cluster is still not fully deployed.
The cluster "POST DEPLOY SETUP" has been stuck at "ClusterMgmtIpPingable" for over two hours now.
From the perspective of VMware, everything looks fine also.
As I can find absolutely nothing regarding this kind of behaviour/situation, I hope to get some help from the NetApp community.
Best regards and thank you in advance.
What does the cluster ring look like?
set -priv adv
cluster ring show
And what's the status of HA?
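For the HA status, something like this should show it (exact output varies a bit by ONTAP version):
cluster ha show
storage failover show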
Hi,
I have three 2750s, each with two controllers, and two NetApp cluster switches.
e0a from each controller is connected to Switch A (six connections) and e0b from each controller to Switch B. All connections are solid, and on every controller e0a and e0b are populated with internal IPs.
On Node 1 controller 1 I created a cluster, and Node 1 controller 2 joined without any issues.
On Nodes 2 and 3, controllers 1 and 2, I ran cluster setup, chose join, and provided the e0a IP of Node 1 controller 1, and they all failed; this is happening on all four controllers. I also tried it from the GUI guided setup and that failed too.
The error stated that the node is not reachable / not able to ping.
From every controller on Nodes 2 and 3 I am able to ping clus1 and clus2, the default cluster interconnect LIFs on e0a and e0b.
I need some help; what am I missing?
Any help will be appreciated.
That sounds like it is cabled correctly.
Are the CN1610s configured correctly, including the ISL?
All link lights showing good?
And are you using compatible cluster cables and SFPs? (See hwu.netapp.com for a full list.)
It's all NetApp hardware.
As I stated, I see that all the e0a and e0b ports are up and have IPs.
I am not sure where it's goofed up.
Node 1 controllers 1 and 2 are clustered.
Node 2 controllers 1 and 2 successfully ping Node 1 controllers 1 and 2 on clus1 and clus2 (e0a, e0b).
Node 3 controllers 1 and 2 successfully ping Node 1 controllers 1 and 2 on clus1 and clus2 (e0a, e0b).
I am not able to ping from Node 1 controllers 1 and 2 to the following (it asked me to select the node):
Node 2 controllers 1 and 2, e0a and e0b
Node 3 controllers 1 and 2, e0a and e0b
How can I break, remove, or un-cluster Node 1 controllers 1 and 2 and start fresh? Any other hints?
Thanks
Also check this setting: "network options switchless-cluster show"
It should be set to false; if not, modify it and try again.
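If it shows true, a rough sketch of the change (advanced privilege; assuming you really are on a switched cluster) would be:
set -privilege advanced
network options switchless-cluster modify -enabled false
network options switchless-cluster show
set -privilege admin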
As far as starting over... if there's no data on them, you could just use option 4 in the boot menu and start over.
Create the cluster using the cluster setup wizard.
After you create the cluster on 01, join 02 using 01's IP address. Join 03 using a 169.x address off 01; do the same for 04, and then 05 & 06.
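Roughly, the join itself is done by re-running the wizard on each new controller and pointing it at the existing cluster (interactive sketch; the prompts vary by release):
cluster setup
(choose "join" when asked whether to create or join a cluster, then enter a cluster LIF address of node 01, e.g. one of its 169.254.x.x addresses)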
Just for clarification of terms: the terms node and controller tend to be used interchangeably, I've noticed.
You have 3 HA pairs and 6 controllers/nodes. When the cluster is fully set up you will have nodes 01 & 02, then add 03 & 04, and lastly 05 & 06.
I "HAD" the same problem. The issue is with ESXi hosts not the Netapp cluster. what you have to do is reboot your
offending esxi hosts one at a time. I've forgotten the technical specifics but some type of FCP communication between host(s) and switch is locking out logins.
Hi
You can try using AWA (Automated Working Set Analyser) to size how much cache capacity you need.
Max Projected Cache Size
The size at which the SSD cache would hold every eligible data block that was requested from disk during the AWA run. Note that this does not guarantee a hit for all future I/O operations, because they might request data that is not in the cache. However, if the workload during the AWA run was a typical one, and if your budget allows for it, this would be an ideal size for your Flash Pool cache.
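If I remember right, AWA is started from the nodeshell at advanced privilege, roughly like this (aggregate name is a placeholder; the exact syntax may differ between releases):
priv set advanced
wafl awa start aggr_name
(let it run while a typical workload is on the aggregate)
wafl awa print
wafl awa stop aggr_name
priv set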
You can log in to https://mysupport.netapp.com; you will find everything you need there.
I attached a screenshot showing the output generated by the commands.
Everything looks fine.
By the way => Nothing changed overnight. Still stuck at "ClusterMgmtIpPingable".
I would think you could use the 7MTT to move the datastores as a whole, but downtime will be required for the VMs on the datastore(s) being moved.
You mean, when we have the old and new NetApps in place, we can migrate the data from old to new using 7MTT without disrupting the users?
I mean,
1. Just turn on the new NetApp with version 9.4 or something.
2. And then use 7MTT over the network.
3. Migrate the datastores from the old datastore (NetApp 8.1) to the new datastore (NetApp 9.4).
4. And then turn off the old NetApp.
5. Start using the new one? (For this we keep the hostnames of the NetApp nodes the same as the old ones after migration.)
As said before, almost all the NetApp data is accessed using NFS.
Is this correct, or am I missing something?
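One note: if the NFS datastores ever need to be re-mounted on the free ESXi hosts after the cutover, that can be done from the ESXi shell without vCenter (host name, export path and datastore name here are just placeholders):
esxcli storage nfs list
esxcli storage nfs add -H netapp01 -s /vol/datastore1 -v datastore1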
Why not use sVmotion?
we do not have it. We use free ESXi
Thanks.
Regards.
Hello,
we were adding some disks to an existing aggregate and got this error message:
Hello, Lorenzo
You're right ... often you reach the max capacity before the max disk number.
If your limit is 400 TB, you cannot grow your aggregate beyond this limit.
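To see how close an aggregate is to the limit, something like this helps (7-Mode first, clustered ONTAP after; the aggregate name is a placeholder):
df -A -h aggr1
aggr show_space -h aggr1
storage aggregate show -aggregate aggr1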
Hi Cedric,
thank you for the reply.
Better to keep that in mind when planning a storage design. It's quite frustrating to add fully loaded shelves and not be able to use all the drives in them ;-)
Lorenzo
Sure,
But you can create another AGGR ...
You just need to be careful about the disk number in your RGs beforehand.
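For example, when you create the new aggregate you can set the RAID group size explicitly so the disks divide evenly into RAID groups (names and disk counts are just placeholders; check the limits for your platform and version first):
aggr create aggr2 -t raid_dp -r 16 32 (7-Mode)
storage aggregate create -aggregate aggr2 -diskcount 32 -maxraidsize 16 (clustered ONTAP)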
I'm sorry for you.
But just a last couple of questions:
- DataOntap version
- System FAS xxx
Maybe with a Data ONTAP update you can have more capacity supported.