Channel: All ONTAP Discussions posts

OnTap 9.5 BGP


Has anyone had success configuring BGP in ONTAP 9.5 yet? I've read through the documentation several times and successfully created peer groups, but I'm still missing some key step.
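For reference, here is a minimal sketch of the 9.5 workflow as I understand it from the network management guide. All node names, the ASN, and the addresses are placeholders, and the exact parameter names should be double-checked with "?" completion on your cluster. The step most often missed is the VIP LIF, since BGP only announces routes for VIP data LIFs:

```
# 1. (Optional) per-node BGP configuration; if skipped, peer-group
#    creation auto-creates one with the default ASN.
cluster1::> network bgp config create -node node1 -asn 65501

# 2. A BGP peer group ties a BGP LIF to the peer router.
cluster1::> network bgp peer-group create -peer-group pg1 -ipspace Default
              -bgp-lif bgp_lif1 -peer-address 10.0.1.254

# 3. The easy-to-miss part: data traffic only uses BGP once a VIP
#    (virtual IP) data LIF exists -- it is announced via the peer group.
cluster1::> network interface create -vserver vs0 -lif vip1 -is-vip true
              -address 192.168.100.10 -netmask-length 32 -home-node node1

# 4. Verify the announcement.
cluster1::> network bgp peer-group show
```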


7MTT Cifs Migration Keeping IP and Identity


We are trying to migrate several CIFS filers and vFilers to clustered ONTAP (cDOT), and in our testing we have come across some problems when trying to follow the transition guide.

Though the guide doesn't mention it, you can't create a CIFS server on the new SVM without a LIF, so you have to borrow a temporary IP and create a temporary LIF. Then you can create a CIFS server with a temporary identity.

If you follow the ONTAP 8 procedure, the guide doesn't explain how to reconfigure the CIFS server on the 7-Mode system without terminating CIFS and re-running cifs setup, which is disruptive to clients.

We then tried to follow the procedure for ONTAP 9.0 or later, but failed to run the vserver cifs modify command to change the CIFS NetBIOS name to the 'real' name. The error we got indicated that our AD domain ID didn't have rights to rename the CIFS identity, although it can add and remove entries.

::> vserver cifs modify -vserver testsvm -cifs-server testsvm 

In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of a Windows account with sufficient privileges to add
computers to the "CN=Computers" container within the "xxxxxxx" domain.
Enter the user name: xxxxxxx
Enter the password:xxxxxxxxxx

Error: Machine account creation procedure failed
  [   721] Loaded the preliminary configuration.
  [   822] Successfully connected to ip aa.bb.cc.dd, port 88 using TCP
  [  1031] Successfully connected to ip aa.bb.cc.dd, port 389 using TCP
**[  1140] FAILURE: Could not rename existing account
**         'CN=testsvmtempid,CN=Computers,DC=xxx,DC=xxxxx,DC=com'
**         to 'cn= testsvm,CN=Computers,dc=XXX,dc=XXXXX,dc=COM':
**         Insufficient access
Error: command failed: Failed to create the Active Directory machine account "TESTSVM". Reason: LDAP Error: The user has insufficient access rights.

Our domain admins weren't aware of any permissions that could be applied to our ID to allow the modify to run. We ended up having the domain admin remove the old and temporary vFiler/SVM entries from the domain, then deleting the SVM's CIFS configuration and recreating it with the 'real' identity. This resulted in losing the CIFS shares from the SVM, so they had to be recreated by hand.

What permissions do we need to be able to run vserver cifs modify so the migrations can be done with minimal disruption? Due to security concerns we would not be allowed to use a domain admin account, so we're trying to understand what the minimum set of permissions would be.
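In case it helps others hitting the same error: the rename needs write access on the computer object itself, not just create/delete rights on the Computers container. One pragmatic option (hedged; MYDOMAIN\ontap-svc is a placeholder account name) is to have a domain admin grant the service account full control over just the temporary machine account with dsacls before running the modify:

```
C:\> dsacls "CN=testsvmtempid,CN=Computers,DC=xxx,DC=xxxxx,DC=com" /G "MYDOMAIN\ontap-svc:GA"
```

Here GA is the dsacls abbreviation for Generic All (full control) on that one object. A tighter grant (Write Property on the naming attributes) may also suffice, but is harder to express; test in a lab first.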

Re: discrepancy btw volume size and transferred size by SnapMirror


What is the size of the volume on your source system? Do you use deduplication and compression?

The difference could be due to storage efficiency technologies. SnapMirror preserves storage efficiency on the source and destination volumes, with one exception: when postprocess data compression is enabled on the destination. In that case, all storage efficiency is lost on the destination.
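To see whether efficiency explains the gap, a quick check is to compare logical versus physical usage and the savings report on both sides. A hedged sketch, with svm1/vol1 as placeholders (the logical-used field is only present on recent ONTAP releases):

```
::> volume show -vserver svm1 -volume vol1 -fields size, used, logical-used
::> volume efficiency show -vserver svm1 -volume vol1
```

If logical-used is well above used on the source, SnapMirror transferring the deduplicated/compressed blocks would explain the smaller transferred size.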

FAS8020 supporting IBM LTO8 HH drives since the ONTAP upgrade


Looking for help and advice with the following issue since our ONTAP upgrade.

Does a FAS8020 support IBM LTO-8 HH drives, i.e. does ONTAP v9 onwards support IBM LTO-8 HH drives? I have just connected the new tape library with 6 drives and found this nonqualified device listed under supported-status:

netapp-c2-01::> storage tape show-supported-status
  Tape Drive            Supported  Support Status
  "IBM ULT3580-HH8 "    false      Nonqualified

Our current version is NetApp Release 9.3P10.

Can anyone please advise?

Re: FAS8020 supporting IBM LTO8 HH drives since the ONTAP upgrade


Gidi

 

Thanks for the help, much appreciated!

Re: Ontap 9.x root-data-data partitioning discussion


Hi Arne,

 

Did you manage to solve this?

I am in a similar situation. I am only adding 2 drives to a shelf which currently only has 12 drives.  


Re: It appears "volume move" works okay, I may have a problem with out of space conditions


So several issues came to light through this exercise.

Our last vol move of a 100 TB volume, on a tight aggregate, appeared to work; the volume was copied to a new aggregate and transferred. No errors were reported.

However, immediately after the Monday cutover (auto-triggered), users started reporting that their files had reverted to an earlier version; the date was the date of the original volume-move start operation.

There are several issues with troubleshooting this issue:

  1. This is a restricted site, so no AutoSupports go to NetApp
  2. The server that forwarded weekly AutoSupports to email had been turned down (the new one is not up yet)
  3. The log limit on the array's mroot removed the original logs that applied to the trigger operation
  4. We do not have a syslog server to send "ALL" logs to; not sure that can be done either
  5. Volume move does not add to the "volume recovery-queue", so I cannot undelete the original source volume

I ran a test on a small volume populated with 0-byte files in nested directories. I watched the snapshots and updates "volume move" made, and they worked fine. The only difference between my troublesome moves and the test was that the trouble was on volumes with dedicated aggregates with minimal free space. (Don't ask why...) The same is true of the other two large volumes that had problems.

All of the smaller volumes which I had moved appear to have worked okay.

 

So my conclusion is that this happened because of the lack of free space, for one reason or another, but of course I can't prove it.

 

I would like to request the following from NetApp:

  • Please add an option to volume move to keep the original volume around if there is an issue. (I forgot: in my case the backup was to a smaller model array, which has a smaller maximum volume size.)

If anyone knows how to set up a network syslog-type server that can keep all of the ONTAP logs, please let me know.

 

In the final analysis, I had to ask that the case be closed, because I could not provide logs proving or disproving the users' reports. I believe that volume move will work correctly as long as there is enough space to do what it needs. Of course, this is all conjecture on my part, and I apologize if it is wrong. In the meantime, I've had to revert to SnapMirror, which appears to be a little slower than vol move.
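Until an option to keep the source volume exists, the closest built-in control I know of is deferring cutover, so the source stays authoritative until you explicitly trigger it and you can verify state and free space first. A sketch with placeholder names (vs0, bigvol, aggr_new):

```
# Start the move but do not let it cut over on its own.
::> volume move start -vserver vs0 -volume bigvol
      -destination-aggregate aggr_new -cutover-action wait

# Watch progress; clients keep using the source until cutover.
::> volume move show -vserver vs0 -volume bigvol

# Only after verifying, trigger the cutover manually.
::> volume move trigger-cutover -vserver vs0 -volume bigvol
```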

 

TasP

Re: Continuing with my volume move issues...


I believe I can use 'volume move' and verify data contents by using XCP.

 

My thought is to start a volume move operation with a manual trigger; then take a daily snap and SnapVault it to my secondary; I can create a new snap on my secondary to keep the data.

 

However, I thought that I could also run an XCP scan to capture the file state prior to triggering the cutover, and again after the cutover, for comparison purposes. I am having a little trouble coming up with the xcp syntax; perhaps someone here can help. My thoughts are:

./xcp scan -newid XXX ontap:/export/path    <- prior to the manual cutover

./xcp sync dry-run -id XXX                  <- after the manual cutover

I ran a small test, and this seems to be the closest. I also tried 'xcp scan stats -l', but I can't figure out how to do a quick comparison. When I sent the output to a text file (-l) and reran it later, I had a whole lot of diffs. Not sure whether that would be helpful.
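One alternative worth trying (hedged; check `xcp help scan` for what your XCP build supports): checksum the tree before and after cutover and diff the sorted output, since scan ordering can differ between runs. The export path is a placeholder:

```
# Baseline before cutover; -md5, where available, adds per-file checksums.
./xcp scan -md5 ontap:/export/path | sort > before.txt

# Repeat after cutover and compare.
./xcp scan -md5 ontap:/export/path | sort > after.txt
diff before.txt after.txt
```

An empty diff would show the cutover preserved file contents, not just names and sizes.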

Re: Ontap 9.x root-data-data partitioning discussion


I have used the solution described twice, and it works without problems. Keep in mind that the limit is 48 drives. You can use the existing spare drive to partition the new drives, as I did in my example.

 

Best of luck to you.

 

With regards

Arne

Re: It appears "volume move" will cause massive data loss on large volume


Sorry, it was a mistake to hit 'me too'.

 

br

 

Zoltan

Commit a file to WORM via api


Hello,
in our clusters we currently commit files to WORM state by changing attributes via NFS. I think this is the preferred way, and the "ONTAP 9 Archive and Compliance Using SnapLock Technology Power Guide" confirms my thoughts.
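For completeness, the NFS commit amounts to two steps. A sketch with a placeholder mount point and an example retention date:

```
# Set the file's atime to the desired retention date...
touch -a -t 202612312359 /mnt/slc_vol/report.pdf

# ...then remove write permission; on a SnapLock volume this
# writable -> read-only transition commits the file to WORM
# until the atime date passes.
chmod -w /mnt/slc_vol/report.pdf
```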

This KB shows the same can be done via the API.

Does anyone have experience using the API? Are there any disadvantages to consider (security, performance, etc.)?

Thank you

Lorenzo 


Re: It appears "volume move" works okay, I may have a problem with out of space condition


Hello,

you are pointing at a "lack of free space": can I ask whether you are referring to the source or the destination aggregate? AFAIK "vol move" will not start if there is not enough space on the destination aggregate.
Of course, if the vol move takes a long time, you can run out of disk space if something is writing too much data to your volume...

Can I ask how the option "space-guarantee" is set on all the volumes that share the involved aggregates?
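For reference, a quick way to pull that per aggregate (aggregate name is a placeholder):

```
::> volume show -aggregate aggr1 -fields volume, space-guarantee, available, percent-used
```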

Cheers

Lorenzo 


Re: It appears "volume move" works okay, I may have a problem with out of space condition


My guarantee is none.  Available space was at 5% on the source.  "ergo the move".

 

Re: 7-mtt and Ontap 9.2


You have to include the patch version.

Example:

"clustered.ontap.versions.supported= 9.3P1"

ONTAP 9.5


What are the new features in ONTAP 9.5, and what changes were made with regard to synchronous mirroring?

Re: ONTAP 9.4 2750 controllers :: New Switched cluster setup failing.


All good here. As soon as the ISL connections were made, it took me 10 minutes to finish up the cluster setup.

 

Appreciate your help.

 

Thanks  

NFS/CIFS Encryption


In terms of end-to-end encryption over NFS/CIFS, I know NetApp Volume Encryption encrypts data at rest on volumes. What about encryption in flight from the VM clients to the NetApp storage?

For NFS datastores we are using v3 and not Kerberos (I know Kerberos can support AES). We also use NFS/CIFS shares. So, what kinds of encryption are supported here, if any, and how would they work?
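For the SMB side, ONTAP can encrypt in flight with SMB3. A hedged sketch (svm1 and share1 are placeholders, and clients must support SMB3):

```
# Require SMB3 encryption for every session to the SVM:
::> vserver cifs security modify -vserver svm1 -is-smb-encryption-required true

# Or opt in per share:
::> vserver cifs share properties add -vserver svm1 -share-name share1
      -share-properties encrypt-data
```

For NFSv3 with AUTH_SYS there is no in-flight encryption option; krb5p (Kerberos with privacy) is the NFS route, which would mean moving off plain v3/sys security.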

 

Thanks!
