Channel: All ONTAP Discussions posts

Multiple routes with same destination and metric


ONTAP 9.1RC1 on 6-node cluster

 

Hi,

 

I just noticed that on my cluster, an SVM has the following routes:

 

ntap01::> network route show
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
vs1
                    0.0.0.0/0       172.25.0.1      20
                    0.0.0.0/0       192.168.87.1    20

 

The SVM has LIFs on these two subnets (one for data and the other for backup traffic). I am wondering whether the above routes pose any risk, especially since they both have the same metric. Is ONTAP smart enough to know that if the LIF is on the 172.25.0.0 subnet it should use the first route for outgoing traffic, and if the LIF is on the 192.168.87.0 subnet it should use the second route?
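For reference, ONTAP can report which LIFs are associated with each route, and outbound reachability can be tested from a specific LIF. A quick sketch (ONTAP 9.x syntax, which may vary by release; backup_lif1 is just a placeholder LIF name):

ntap01::> network route show-lifs -vserver vs1
ntap01::> network ping -vserver vs1 -lif backup_lif1 -destination 192.168.87.1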

 

Thanks,

 


How to confirm if NIS is working on a NetApp filer


1) In the ypwhich output below it says "Not bound to any NIS server". Does this confirm that NIS is not in use?

 

2) How can I confirm which services NIS is providing in the current environment (hosts, netgroup, group or passwd)?

raman1*> options nis

nis.domainname               bpxa

nis.enable                   on

nis.group_update.enable      on

nis.group_update_schedule    6,7,8,9,10,11,12,13,14,15,16,17,18,24

nis.netgroup.domain_search.enable on

nis.netgroup.legacy_nisdomain_search.enable on

nis.servers                  161.99.65.8,161.99.65.10

nis.slave.enable             off

ramancss003*> ypwhich

Not bound to any NIS server.

ramancss003*>

 

raman4> options nis

nis.domainname               bpxa

nis.enable                   on

nis.group_update.enable      on

nis.group_update_schedule    6,7,8,9,10,11,12,13,14,15,16,17,18,24

nis.netgroup.domain_search.enable on

nis.netgroup.legacy_nisdomain_search.enable on

nis.servers                  161.99.65.8,161.99.65.10

nis.slave.enable             off

ramancss004*> ypwhich

Not bound to any NIS server.

ramancss004*>
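For what it's worth, on 7-Mode the name-service lookup order is controlled by /etc/nsswitch.conf, so reading that file shows which databases (hosts, passwd, group, netgroup) actually reference nis, and "nis info" reports the current binding state. A quick sketch (the nsswitch.conf contents below are only an example, not taken from your systems):

raman1*> rdfile /etc/nsswitch.conf
hosts: files dns nis
passwd: files nis
netgroup: files nis
group: files nis
raman1*> nis info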

Re: Node disk fragmentation



I don't think this is it. We really do have fragmentation, and we see it on most of the clusters regardless of whether Flash Pool is used or not. We are running OCPM 7.1 at the moment.

 

In the statit output I see poor RAID statistics and a roughly 2:1 cpreads:writes ratio:

 

                     RAID Statistics (per second)
   6735.19 xors                              0.00 long dispatches [0]
      0.00 long consumed [0]                 0.00 long consumed hipri [0]
      0.00 long low priority [0]             0.00 long high priority [0]
     99.89 long monitor tics [0]             0.02 long monitor clears [0]
      0.00 long dispatches [1]               0.00 long consumed [1]
      0.00 long consumed hipri [1]           0.00 long low priority [1]
     99.89 long high priority [1]           99.89 long monitor tics [1]
      0.02 long monitor clears [1]             18 max batch
      4.87 blocked mode xor               1142.43 timed mode xor
      6.54 fast adjustments                  7.73 slow adjustments
         0 avg batch start                      0 avg stripe/msec
      0.00 checksum dispatches               0.00 checksum consumed
     56.29 tetrises written                  0.00 master tetrises
      0.00 slave tetrises                 3384.92 stripes written
   3347.99 partial stripes                  36.93 full stripes
  28699.47 blocks written                26734.05 blocks read
     20.66 1 blocks per stripe size 1      152.46 1 blocks per stripe size 20
    179.10 2 blocks per stripe size 20     201.02 3 blocks per stripe size 20
    218.03 4 blocks per stripe size 20     240.61 5 blocks per stripe size 20
    251.40 6 blocks per stripe size 20     260.87 7 blocks per stripe size 20
    261.86 8 blocks per stripe size 20     248.86 9 blocks per stripe size 20
    229.94 10 blocks per stripe size 20    211.66 11 blocks per stripe size 20
    193.01 12 blocks per stripe size 20    168.48 13 blocks per stripe size 20
    144.02 14 blocks per stripe size 20    123.53 15 blocks per stripe size 20
     99.40 16 blocks per stripe size 20     76.79 17 blocks per stripe size 20
     55.66 18 blocks per stripe size 20     31.30 19 blocks per stripe size 20
     16.27 20 blocks per stripe size 20


disk             ut%  xfers  ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs greads--chain-usecs gwrites-chain-usecs
/AGGR_ROOT_03/plex0/rg0:
0a.30.0            2   2.70    0.35   1.51   639   2.24   9.81   779   0.11   5.91  1001   0.00   ....     .   0.00   ....     .
0a.31.0            2   3.09    0.35   1.51   687   2.73   8.40   919   0.01   1.00  5732   0.00   ....     .   0.00   ....     .
4b.32.0            2   4.46    2.93   1.08  2835   1.04  19.88   377   0.48   1.00  5714   0.00   ....     .   0.00   ....     .
/AGGR_DATA_03/plex0/rg0:
4b.32.1           22  40.03    0.00   ....     .  18.82  59.68   151  21.22  53.81   110   0.00   ....     .   0.00   ....     .
4b.31.1           23  40.86    0.00   ....     .  19.66  57.20   163  21.20  53.76   118   0.00   ....     .   0.00   ....     .
0a.30.1           78 203.91  152.31   3.84  3550  16.00  29.72   615  35.59  10.52  1027   0.00   ....     .   0.00   ....     .
4b.32.2           75 205.12  154.42   3.83  2950  15.52  31.39   554  35.18  11.00   933   0.00   ....     .   0.00   ....     .
0a.31.2           75 206.01  155.64   3.78  3100  15.55  30.66   557  34.82  11.26   848   0.00   ....     .   0.00   ....     .
0a.30.2           79 206.50  155.46   3.83  3552  15.42  31.79   603  35.62  10.92  1046   0.00   ....     .   0.00   ....     .
4b.32.3           75 206.53  155.80   3.80  3080  15.90  31.29   571  34.83  11.67   840   0.00   ....     .   0.00   ....     .
4b.31.3           74 203.55  152.00   3.92  2812  15.46  31.63   564  36.09  10.82   934   0.00   ....     .   0.00   ....     .
0a.30.3           78 205.15  154.34   3.72  3615  15.52  30.74   590  35.29  11.00   951   0.00   ....     .   0.00   ....     .
4b.32.4           75 205.29  153.46   3.89  2993  15.53  30.75   589  36.31  10.81   942   0.00   ....     .   0.00   ....     .
0a.31.4           75 203.92  153.00   3.84  3040  15.36  30.40   572  35.55  10.70   955   0.00   ....     .   0.00   ....     .
0a.30.4           79 203.39  152.56   3.91  3519  15.37  29.95   631  35.45  11.01   991   0.00   ....     .   0.00   ....     .
4b.32.5           72 198.69  147.49   3.90  2796  15.52  31.59   536  35.68  11.03   866   0.00   ....     .   0.00   ....     .
4b.31.5           75 200.52  150.44   3.94  2906  15.40  30.92   554  34.68  10.96   919   0.00   ....     .   0.00   ....     .
0a.30.5           77 200.43  148.21   3.93  3440  15.76  30.49   598  36.46  10.91   942   0.00   ....     .   0.00   ....     .
4b.32.6           76 204.50  153.70   3.89  3019  15.56  31.83   612  35.24  11.04   939   0.00   ....     .   0.00   ....     .
0a.31.6           76 211.24  160.19   3.70  3059  15.48  31.64   559  35.57  11.06   910   0.00   ....     .   0.00   ....     .
0a.30.6           78 206.84  155.03   3.79  3545  15.66  30.25   601  36.15  10.67  1036   0.00   ....     .   0.00   ....     .
4b.32.7           75 207.19  156.46   3.81  2985  15.43  31.30   557  35.30  11.15   859   0.00   ....     .   0.00   ....     .
4b.31.7           75 207.70  157.63   3.88  2885  15.35  30.69   563  34.72  11.14   908   0.00   ....     .   0.00   ....     .
0a.30.7           78 201.06  149.99   3.91  3501  15.61  31.04   588  35.46  11.24   963   0.00   ....     .   0.00   ....     .
4b.32.8           74 204.21  153.27   3.76  3029  15.33  31.27   579  35.61  10.80   889   0.00   ....     .   0.00   ....     .
/AGGR_DATA_03/plex0/rg1:
0a.31.8           21  38.83    0.00   ....     .  18.05  61.33   140  20.78  54.30   111   0.00   ....     .   0.00   ....     .
0a.30.8           22  39.07    0.00   ....     .  18.31  60.49   147  20.76  54.29   113   0.00   ....     .   0.00   ....     .
4b.32.9           76 209.23  157.00   3.80  3037  15.64  30.20   580  36.59  10.98   894   0.00   ....     .   0.00   ....     .
4b.31.9           74 201.50  149.04   4.03  2753  15.64  30.40   567  36.82  10.92   927   0.00   ....     .   0.00   ....     .
0a.30.9           80 213.00  160.05   3.71  3690  15.58  30.78   615  37.37  10.65  1030   0.00   ....     .   0.00   ....     .
4b.32.10          74 210.20  158.12   3.87  2858  15.58  30.31   578  36.50  10.90   902   0.00   ....     .   0.00   ....     .
0a.31.10          75 211.73  160.03   3.87  2896  15.54  29.41   571  36.17  11.02   916   0.00   ....     .   0.00   ....     .
0a.30.10          79 207.73  155.23   3.94  3323  15.55  31.23   576  36.94  10.83  1031   0.00   ....     .   0.00   ....     .
4b.32.11          74 199.78  146.79   3.89  2940  15.46  30.21   560  37.53  10.88   918   0.00   ....     .   0.00   ....     .
4b.31.11          75 215.56  162.85   3.77  2936  15.42  30.67   557  37.29  10.78   967   0.00   ....     .   0.00   ....     .
0a.30.11          80 209.90  158.23   3.80  3627  15.53  30.01   628  36.14  11.02  1011   0.00   ....     .   0.00   ....     .
4b.32.12          74 204.92  152.36   3.73  3067  15.51  30.20   575  37.05  10.85   902   0.00   ....     .   0.00   ....     .
0a.31.12          76 210.26  157.36   3.98  2930  15.61  30.77   581  37.28  10.95   916   0.00   ....     .   0.00   ....     .
0a.30.12          79 208.94  156.94   3.77  3383  15.32  30.44   570  36.68  10.86  1061   0.00   ....     .   0.00   ....     .
4b.32.13          73 200.18  148.01   3.86  2998  15.37  30.66   586  36.80  10.73   920   0.00   ....     .   0.00   ....     .
4b.31.13          75 206.07  153.50   3.93  3050  15.52  30.68   591  37.05  10.68   952   0.00   ....     .   0.00   ....     .
0a.30.13          79 210.68  159.01   3.91  3587  15.35  30.28   628  36.32  11.17  1010   0.00   ....     .   0.00   ....     .
4b.32.14          75 215.53  162.97   3.70  3033  15.47  30.04   596  37.09  10.42   955   0.00   ....     .   0.00   ....     .
0a.31.14          75 208.71  156.31   3.91  2933  15.58  30.40   556  36.82  10.67   968   0.00   ....     .   0.00   ....     .
0a.30.14          78 208.60  156.47   3.87  3331  15.38  31.13   592  36.75  10.86   975   0.00   ....     .   0.00   ....     .
4b.32.15          76 213.41  160.78   3.86  2947  15.53  29.74   590  37.10  10.62   967   0.00   ....     .   0.00   ....     .
4b.31.15          76 213.52  161.41   3.66  3131  15.46  30.87   592  36.65  10.94   933   0.00   ....     .   0.00   ....     .
/AGGR_DATA_03/plex0/rg2:
4b.32.16          22  39.13    0.00   ....     .  18.42  61.67   147  20.71  55.77   112   0.00   ....     .   0.00   ....     .
0a.30.15          22  39.38    0.00   ....     .  18.70  60.80   145  20.69  55.71   109   0.00   ....     .   0.00   ....     .
0a.31.16          74 206.72  155.79   3.81  2759  15.51  31.00   508  35.42  11.25   861   0.00   ....     .   0.00   ....     .
4b.32.17          75 203.32  153.05   3.80  2983  15.56  30.48   539  34.70  11.57   817   0.00   ....     .   0.00   ....     .
0a.30.22          79 207.19  157.11   3.77  3437  15.58  31.52   551  34.50  11.26   951   0.00   ....     .   0.00   ....     .
4b.31.17          75 206.84  156.15   3.78  3003  15.60  31.31   533  35.09  11.21   885   0.00   ....     .   0.00   ....     .
4b.32.18          74 206.46  156.73   3.88  2796  15.35  31.23   541  34.38  11.36   860   0.00   ....     .   0.00   ....     .
0a.30.17          79 203.93  153.23   3.78  3518  15.52  30.45   588  35.17  11.00   991   0.00   ....     .   0.00   ....     .
0a.31.18          75 209.19  158.20   3.84  2883  15.51  31.05   546  35.48  11.20   904   0.00   ....     .   0.00   ....     .
4b.32.19          75 206.44  156.06   3.64  3185  15.38  31.45   535  35.00  11.43   880   0.00   ....     .   0.00   ....     .
0a.30.18          78 212.69  161.99   3.68  3373  15.37  31.37   558  35.33  11.15  1001   0.00   ....     .   0.00   ....     .
4b.31.19          75 205.36  153.75   3.85  2967  15.92  31.47   541  35.69  11.41   907   0.00   ....     .   0.00   ....     .
4b.32.20          74 211.59  161.04   3.71  2857  15.64  30.92   537  34.91  11.34   873   0.00   ....     .   0.00   ....     .
0a.30.19          78 206.73  155.74   3.69  3505  15.71  30.55   569  35.28  11.64   926   0.00   ....     .   0.00   ....     .
0a.31.20          75 210.89  160.87   3.76  2957  15.26  30.84   556  34.76  11.25   897   0.00   ....     .   0.00   ....     .
4b.32.21          74 202.38  151.96   3.88  2891  15.68  30.79   526  34.74  11.45   840   0.00   ....     .   0.00   ....     .
0a.30.20          79 203.98  153.04   3.93  3474  15.76  30.84   575  35.17  11.58   932   0.00   ....     .   0.00   ....     .
4b.31.21          75 211.86  161.03   3.81  2907  15.57  31.63   565  35.26  11.12   909   0.00   ....     .   0.00   ....     .
4b.32.22          75 205.53  154.98   3.81  3016  15.56  30.40   560  34.98  11.16   909   0.00   ....     .   0.00   ....     .
0a.30.21          78 204.44  153.64   3.89  3430  15.70  31.15   557  35.10  11.65   936   0.00   ....     .   0.00   ....     .
0a.31.22          75 210.42  159.52   3.78  2894  15.70  30.81   544  35.19  11.51   849   0.00   ....     .   0.00   ....     .
4b.32.23          74 202.89  153.08   3.66  3028  15.18  29.69   589  34.63  11.35   857   0.00   ....     .   0.00   ....     .
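In case anyone wants to reproduce these numbers or put a figure on the fragmentation, this is roughly how the counters above are gathered and how a one-time layout measurement can be run from the node shell (advanced privilege; node and volume names are placeholders, and command availability should be verified on your release):

cluster01::> node run -node node-03
node-03> priv set advanced
node-03*> statit -b                        # start collecting counters, let the workload run a few minutes
node-03*> statit -e                        # stop and print the RAID/disk report shown above
node-03*> reallocate measure -o /vol/data_vol   # one-shot measurement; prints a layout optimization rating (higher = more fragmented)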

 

 

Re: Fileserver Migration with robocopy


 

Hey guys,

 

well I just did some more tests and found out something. I'm not sure whose "fault" it really is - but I'd say there is a good chance it might be on NetApp's side.

 

Clearly something changed in Robocopy, which is easy to prove since the copy works just fine with older versions but not with more recent ones. The underlying problem, however, might be on NetApp's side, which does not seem to imitate an actual Windows file server quite closely enough. That again is easily proven, because the problem does not occur when copying between (or to) Windows file servers. So maybe newer Robocopy versions rely on a feature that is missing or broken in NetApp's CIFS implementation?

 

So here's what I did. I copied a simple folder with a few PDFs with Robocopy. I copied from a cDOT Filer to a 7mode filer here, but copying the same folder from a "real" Windows fileshare shows the exact same behavior. 

 

I did an initial robocopy /MIR /COPYALL, which as expected creates a duplicate of the folder and files. Every subsequent run of /MIR /COPYALL shows the problem we have: all files are marked as "Modified" although they are not. However, as established, the files do not get copied.

 

So another run with the /debug parameter:

 

 

-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robustes Dateikopieren für Windows
-------------------------------------------------------------------------------
  Gestartet: Freitag, 2. Juni 2017 12:18:40
OSVersion : 6.3 (9600) 25800306
   Quelle : \\?\UNC\SOURCE-FILER\testfolder\
     Root : "\\SOURCE-FILER\fpm\"
  VolName : "fpm"
   Serial : 80000415
  MaxName : 255
   FSFlag : 000400CF
   FSType : "NTFS"
   IsNTFS : 1
     Ziel : \\?\UNC\TARGET-FILER\testfolder\
     Root : "\\TARGET-FILER\data\"
  VolName : ""
   Serial : 3B02D496
  MaxName : 255
   FSFlag : 0004004F
   FSType : "NTFS"
   IsNTFS : 1
    Fudge : 0
    Dateien : *.*
  Optionen: *.* /S /E /COPYALL /PURGE /MIR /DEBUG /R:1000000 /W:30 
------------------------------------------------------------------------------
\\?\UNC\SOURCE-FILER\testfolder\tr-3197_TEchnicalOverviewofSnapDrive.pdf: Create: 1cd4ee6369beb3e LastA: 1d2db899501cfd8 LastWr: 1cd49f3a4f22f93 Change: 1d093933059611c
\\?\UNC\SOURCE-FILER\testfolder\tr-3483_ThinProvisioningInANetAppSANorIPSANEnterpriseEnvironment.pdf: Create: 1cd4ee636a428c6 LastA: 1d2db8995068ac8 LastWr: 1cd486bafe9d4dd Change: 1d093933059af46
\\?\UNC\SOURCE-FILER\testfolder\tr-3563_ThinProvisioningIncreasesStorageUtilizationWithOnDemandUtilization.pdf: Create: 1cd4ee636a895aa LastA: 1d2db89953c1968 LastWr: 1cd49f39598b963 Change: 1d093933059fd5c
\\?\UNC\SOURCE-FILER\testfolder\tr-3702_BestPractivesForMicrosoftVirtualizationAndSnapManagerHyperV.pdf: Create: 1cd4ee636b19682 LastA: 1d2db8995601bf6 LastWr: 1cd4a3269d656a4 Change: 1d09393305a4b86
\\?\UNC\SOURCE-FILER\testfolder\tr-3828_SnapDrive62WindowsBestPractices.pdf: Create: 1cd4ee636ba7040 LastA: 1d2db89956d3b74 LastWr: 1cd49f3b27efb4b Change: 1d09393305a999c
\\?\UNC\SOURCE-FILER\testfolder\tr-3965_ThinProvisioningDeploymentAndImplementationGuideDataOntap87-mode.pdf: Create: 1cd4ee636c37136 LastA: 1d2db899590efc4 LastWr: 1cd496422779fad Change: 1d09393305ae7b2
\\?\UNC\SOURCE-FILER\testfolder\Thumbs.db: Create: 1ce4bf5d00dcf28 LastA: 1d2db8994f3c65e LastWr: 1ce4bf5dbc19250 Change: 1d0b3fd4d24fd44
\\?\UNC\TARGET-FILER\testfolder\Thumbs.db: Create: 1ce4bf5d00dcf28 LastA: 1d2db89890de0ea LastWr: 1ce4bf5dbc19250 Change: 1d2db8994f02cce
\\?\UNC\TARGET-FILER\testfolder\tr-3197_TEchnicalOverviewofSnapDrive.pdf: Create: 1cd4ee6369beb3e LastA: 1d2db898929cd32 LastWr: 1cd49f3a4f22f93 Change: 1d2db8994fd4c7e
\\?\UNC\TARGET-FILER\testfolder\tr-3483_ThinProvisioningInANetAppSANorIPSANEnterpriseEnvironment.pdf: Create: 1cd4ee636a428c6 LastA: 1d2db898931bc72 LastWr: 1cd486bafe9d4dd Change: 1d2db8995022e74
\\?\UNC\TARGET-FILER\testfolder\tr-3563_ThinProvisioningIncreasesStorageUtilizationWithOnDemandUtilization.pdf: Create: 1cd4ee636a895aa LastA: 1d2db898967722c LastWr: 1cd49f39598b963 Change: 1d2db89953796ea
\\?\UNC\TARGET-FILER\testfolder\tr-3702_BestPractivesForMicrosoftVirtualizationAndSnapManagerHyperV.pdf: Create: 1cd4ee636b19682 LastA: 1d2db89898e5aea LastWr: 1cd4a3269d656a4 Change: 1d2db89955b9a18
\\?\UNC\TARGET-FILER\testfolder\tr-3828_SnapDrive62WindowsBestPractices.pdf: Create: 1cd4ee636ba7040 LastA: 1d2db8989a3907c LastWr: 1cd49f3b27efb4b Change: 1d2db8995689448
\\?\UNC\TARGET-FILER\testfolder\tr-3965_ThinProvisioningDeploymentAndImplementationGuideDataOntap87-mode.pdf: Create: 1cd4ee636c37136 LastA: 1d2db8989d0e1bc LastWr: 1cd496422779fad Change: 1d2db89958c6ea4
	                   7 -A--------D--	\\?\UNC\SOURCE-FILER\testfolder\
FindNextFile() Difference = - 16845h:22m:20.6824330s	\\SOURCE-FILER\testfolder\Thumbs.db
FindNextFile() Difference = - 17835h:22m:32.9412450s	\\SOURCE-FILER\testfolder\tr-3197_TEchnicalOverviewofSnapDrive.pdf
FindNextFile() Difference = - 17835h:22m:32.9712430s	\\SOURCE-FILER\testfolder\tr-3483_ThinProvisioningInANetAppSANorIPSANEnterpriseEnvironment.pdf
FindNextFile() Difference = - 17835h:22m:33.3192590s	\\SOURCE-FILER\testfolder\tr-3563_ThinProvisioningIncreasesStorageUtilizationWithOnDemandUtilization.pdf
FindNextFile() Difference = - 17835h:22m:33.5532690s	\\SOURCE-FILER\testfolder\tr-3702_BestPractivesForMicrosoftVirtualizationAndSnapManagerHyperV.pdf
FindNextFile() Difference = - 17835h:22m:33.6363180s	\\SOURCE-FILER\testfolder\tr-3828_SnapDrive62WindowsBestPractices.pdf
FindNextFile() Difference = - 17835h:22m:33.8692850s	\\SOURCE-FILER\testfolder\tr-3965_ThinProvisioningDeploymentAndImplementationGuideDataOntap87-mode.pdf
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
	      Geändert		   25088 --SH---------	Thumbs.db
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
	      Geändert		  754236 -A-----------	tr-3197_TEchnicalOverviewofSnapDrive.pdf
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
	      Geändert		   98707 -A-----------	tr-3483_ThinProvisioningInANetAppSANorIPSANEnterpriseEnvironment.pdf
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
	      Geändert		   3.4 m -A-----------	tr-3563_ThinProvisioningIncreasesStorageUtilizationWithOnDemandUtilization.pdf
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
	      Geändert		   2.3 m -A-----------	tr-3702_BestPractivesForMicrosoftVirtualizationAndSnapManagerHyperV.pdf
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
	      Geändert		  755698 -A-----------	tr-3828_SnapDrive62WindowsBestPractices.pdf
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
	      Geändert		   2.3 m -A-----------	tr-3965_ThinProvisioningDeploymentAndImplementationGuideDataOntap87-mode.pdf
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
------------------------------------------------------------------------------
           Insgesamt   Kopiert  Übersprungen  Keine Übereinstimmung    FEHLER    Extras
Verzeich.:         1         0         0         0         0         0
  Dateien:         7         7         0         0         0         0
    Bytes:    9.68 m    9.68 m         0         0         0         0
   Zeiten:   0:00:00   0:00:00                       0:00:00   0:00:00
Geschwindigkeit:           406352480 Bytes/Sek.
Geschwindigkeit:           23251.675 Megabytes/Min.
   Beendet: Freitag, 2. Juni 2017 12:18:40

  

 

OK. So the same command with Robocopy as it shipped with Windows XP:

 

 

 

-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows     ::     Version XP010
-------------------------------------------------------------------------------

  Started : Fri Jun 02 12:28:25 2017

OSVersion : 6.2 (9200) 23F00206

   Source : \\?\UNC\SOURCE-FILER\testfolder\
     Root : "\\SOURCE-FILER\fpm\"
  VolName : "fpm"
   Serial : 80000415
  MaxName : 255
   FSFlag : 000400CF
   FSType : "NTFS"
   IsNTFS : 1

     Dest : \\?\UNC\TARGET-FILER\testfolder\
     Root : "\\TARGET-FILER\data\"
  VolName : ""
   Serial : 3B02D496
  MaxName : 255
   FSFlag : 0004004F
   FSType : "NTFS"
   IsNTFS : 1

    Fudge : 0

    Files : *.*
	    
  Options : *.* /S /E /COPYALL /PURGE /MIR /DEBUG /R:1000000 /W:30 

------------------------------------------------------------------------------

	                   7 -A--------D--	\\?\UNC\SOURCE-FILER\testfolder\

------------------------------------------------------------------------------

                Total    Copied   Skipped  Mismatch    FAILED    Extras
     Dirs :         1         0         1         0         0         0
    Files :         7         0         7         0         0         0
    Bytes :    9.68 m         0    9.68 m         0         0         0
    Times :   0:00:00   0:00:00                       0:00:00   0:00:00

    Ended : Fri Jun 02 12:28:25 2017

 

 

 

And finally, the "new" robocopy again but (re-) copying the folder to a Windows Fileshare (my 8.1 workstation in this case):

 

 

 

-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robustes Dateikopieren für Windows
-------------------------------------------------------------------------------
  Gestartet: Freitag, 2. Juni 2017 12:33:38
OSVersion : 6.3 (9600) 25800306
   Quelle : \\?\UNC\SOURCE-FILER\testfolder\
     Root : "\\o1903cifs\fpm\"
  VolName : "fpm"
   Serial : 80000415
  MaxName : 255
   FSFlag : 000400CF
   FSType : "NTFS"
   IsNTFS : 1
     Ziel : \\?\UNC\WINDOWS\testfolder\
     Root : "\\o0001\dmp\"
  VolName : "E_DATA"
   Serial : 6431E6C3
  MaxName : 255
   FSFlag : 00C700FF
   FSType : "NTFS"
   IsNTFS : 1
    Fudge : 0
    Dateien : *.*
  Optionen: *.* /S /E /COPYALL /PURGE /MIR /DEBUG /R:1000000 /W:30 
------------------------------------------------------------------------------
\\?\UNC\SOURCE-FILER\testfolder\tr-3197_TEchnicalOverviewofSnapDrive.pdf: Create: 1cd4ee6369beb3e LastA: 1d2db8bad9a9546 LastWr: 1cd49f3a4f22f93 Change: 1d093933059611c
\\?\UNC\SOURCE-FILER\testfolder\tr-3483_ThinProvisioningInANetAppSANorIPSANEnterpriseEnvironment.pdf: Create: 1cd4ee636a428c6 LastA: 1d2db8bad9df0a6 LastWr: 1cd486bafe9d4dd Change: 1d093933059af46
\\?\UNC\SOURCE-FILER\testfolder\tr-3563_ThinProvisioningIncreasesStorageUtilizationWithOnDemandUtilization.pdf: Create: 1cd4ee636a895aa LastA: 1d2db8badceeb66 LastWr: 1cd49f39598b963 Change: 1d093933059fd5c
\\?\UNC\SOURCE-FILER\testfolder\tr-3702_BestPractivesForMicrosoftVirtualizationAndSnapManagerHyperV.pdf: Create: 1cd4ee636b19682 LastA: 1d2db8badf2edf4 LastWr: 1cd4a3269d656a4 Change: 1d09393305a4b86
\\?\UNC\SOURCE-FILER\testfolder\tr-3828_SnapDrive62WindowsBestPractices.pdf: Create: 1cd4ee636ba7040 LastA: 1d2db8badffbf2a LastWr: 1cd49f3b27efb4b Change: 1d09393305a999c
\\?\UNC\SOURCE-FILER\testfolder\tr-3965_ThinProvisioningDeploymentAndImplementationGuideDataOntap87-mode.pdf: Create: 1cd4ee636c37136 LastA: 1d2db8bae228938 LastWr: 1cd496422779fad Change: 1d09393305ae7b2
\\?\UNC\SOURCE-FILER\testfolder\Thumbs.db: Create: 1ce4bf5d00dcf28 LastA: 1d2db8bad8e6050 LastWr: 1ce4bf5dbc19250 Change: 1d0b3fd4d24fd44
\\?\UNC\WINDOWS\testfolder\Thumbs.db: Create: 1ce4bf5d00dcf28 LastA: 1d2db8994f3c65e LastWr: 1ce4bf5dbc19250 Change: 1d0b3fd4d24fd44
\\?\UNC\WINDOWS\testfolder\tr-3197_TEchnicalOverviewofSnapDrive.pdf: Create: 1cd4ee6369beb3e LastA: 1d2db899501cfd8 LastWr: 1cd49f3a4f22f93 Change: 1d093933059611c
\\?\UNC\WINDOWS\testfolder\tr-3483_ThinProvisioningInANetAppSANorIPSANEnterpriseEnvironment.pdf: Create: 1cd4ee636a428c6 LastA: 1d2db8995068ac8 LastWr: 1cd486bafe9d4dd Change: 1d093933059af46
\\?\UNC\WINDOWS\testfolder\tr-3563_ThinProvisioningIncreasesStorageUtilizationWithOnDemandUtilization.pdf: Create: 1cd4ee636a895aa LastA: 1d2db89953c1968 LastWr: 1cd49f39598b963 Change: 1d093933059fd5c
\\?\UNC\WINDOWS\testfolder\tr-3702_BestPractivesForMicrosoftVirtualizationAndSnapManagerHyperV.pdf: Create: 1cd4ee636b19682 LastA: 1d2db8995601bf6 LastWr: 1cd4a3269d656a4 Change: 1d09393305a4b86
\\?\UNC\WINDOWS\testfolder\tr-3828_SnapDrive62WindowsBestPractices.pdf: Create: 1cd4ee636ba7040 LastA: 1d2db89956d3b74 LastWr: 1cd49f3b27efb4b Change: 1d09393305a999c
\\?\UNC\WINDOWS\testfolder\tr-3965_ThinProvisioningDeploymentAndImplementationGuideDataOntap87-mode.pdf: Create: 1cd4ee636c37136 LastA: 1d2db899590efc4 LastWr: 1cd496422779fad Change: 1d09393305ae7b2
	                   7 -A--------D--	\\?\UNC\SOURCE-FILER\testfolder\
			SR GROUP OWNER DACL SACL - SECURITY_DESCRIPTOR_CONTROL
			     D     D   PIDP PIDP 
			SR   -     -   YI-- YI-- - Source
------------------------------------------------------------------------------
           Insgesamt   Kopiert  Übersprungen  Keine Übereinstimmung    FEHLER    Extras
Verzeich.:         1         0         0         0         0         0
  Dateien:         7         0         7         0         0         0
    Bytes:    9.68 m         0    9.68 m         0         0         0
   Zeiten:   0:00:00   0:00:00                       0:00:00   0:00:00
   Beendet: Freitag, 2. Juni 2017 12:33:38

 

  

So here lies the problem then. It is a difference in the "change" timestamp. When the files are copied to a Windows Fileshare, Robocopy reads that value from the files correctly.

However, when Robocopy gets the "last changed" timestamp from the NetApp, a different value shows up!

 

Have a look at this comparison from the robocopy "to windows":

 

\\?\UNC\SOURCE-FILER\testfolder\Thumbs.db: Create: 1ce4bf5d00dcf28 LastA: 1d2db8bad8e6050 LastWr: 1ce4bf5dbc19250 Change: 1d0b3fd4d24fd44
\\?\UNC\WINDOWS\testfolder\Thumbs.db:      Create: 1ce4bf5d00dcf28 LastA: 1d2db8994f3c65e LastWr: 1ce4bf5dbc19250 Change: 1d0b3fd4d24fd44

 

and now what happens when we copy to NetApp:

 

 

\\?\UNC\SOURCE-FILER\testfolder\Thumbs.db: Create: 1ce4bf5d00dcf28 LastA: 1d2db8994f3c65e LastWr: 1ce4bf5dbc19250 Change: 1d0b3fd4d24fd44
\\?\UNC\TARGET-FILER\testfolder\Thumbs.db: Create: 1ce4bf5d00dcf28 LastA: 1d2db89890de0ea LastWr: 1ce4bf5dbc19250 Change: 1d2db8994f02cce

 

 

See the difference? On the NetApp, the "change" value for the file differs from the value on the Windows fileshare. The value does NOT change when I re-run Robocopy, by the way, on either Windows or NetApp.

 

By the way, the "translation" for the timestamps:

 

1d0b3fd4d24fd44 =>   01.07.2015 14:56:09

1d2db8994f02cce =>  02.06.2017 12:18:30
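For reference, those hex values are standard Windows FILETIME stamps (100-nanosecond ticks since 1601-01-01 UTC). A small Python sketch to decode them; note the quoted translations above appear to be local time, so expect a time-zone offset versus the UTC output:

from datetime import datetime, timedelta, timezone

# Windows FILETIME epoch: 1601-01-01 00:00 UTC, in 100-ns ticks.
FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_hex_to_datetime(hex_ticks: str) -> datetime:
    """Convert a hex FILETIME value (as printed by robocopy /DEBUG) to UTC."""
    ticks = int(hex_ticks, 16)
    return FILETIME_EPOCH + timedelta(microseconds=ticks // 10)

for ts in ("1d0b3fd4d24fd44", "1d2db8994f02cce"):
    print(ts, "=>", filetime_hex_to_datetime(ts).isoformat())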

 

 

So this is it then: when Robocopy copies a file to a Windows fileshare it actually succeeds in copying the "change" timestamp. But it is NOT able to do so on a NetApp fileshare, where the value corresponds to the time the file was actually copied. And since Robocopy is not able to properly set this timestamp, it "re-tries" to fix it every single time we run Robocopy. So as I said earlier, this seems to me to be on the NetApp side of things.

 


None of this helps us, though, because we all know that opening a case for this is futile: NetApp's and Microsoft's L1 will each blame the other company and send me away. Has anyone got an "inside contact" who might be in a position to have a look at this?

 

 

 

Regards

 

Chris

What determines an AD DC discovered status


Looking at a customer's CIFS SVMs, we note that some of the DCs connected to the SVM report as slow and one as unavailable.

 

Is there any documentation on these conditions and on the tests the SVM uses to determine the status of a discovered server? For example, if a DC is labelled as slow, what definition of "slow" is being used?


Possible DC status results

 

OK

Unavailable

Slow

Expired

Undetermined

Unreachable
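For reference, the statuses above are reported per discovered server and can be dumped and refreshed from the CLI. A quick sketch (ONTAP 9.x syntax assumed; the SVM name is a placeholder):

cluster1::> vserver cifs domain discovered-servers show -vserver svm_cifs
cluster1::> vserver cifs domain discovered-servers reset-servers -vserver svm_cifs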

 

Volume Full and Nearly Full Thresholds Info from PSTK


Does anyone know if there are any cmdlets available to output the volume full and nearly-full thresholds?

 

I noticed that this is now available for aggregates (v8.3+) in cDOT from Get-NcAggrOption (full_threshold, nearly_full_threshold), but I did not see the equivalent properties for volumes from Get-NcVolOption. It would be handy and much easier to get such attributes from ONTAP directly rather than going through OCUM. I am still using the Data ONTAP PSTK v3.2.0; not sure if this version matters.

 

Thanks,

Timothy

How to avoid maxdirsize excess


Hi.

 

On my FAS2554 cluster running Data ONTAP 8.2.3P9, I hit the maxdirsize limit on one directory of a volume.

For now I have increased the maxdirsize value for this volume and I am archiving some old data to solve the immediate problem, but I need a definitive solution to avoid this in the future.

The problem is that I cannot predict how many files will be added to this directory, because thousands of new static files are created automatically every day, depending on the new content created by the systems (they are content attachments). The amount of new content each day is not regular.

 

I was thinking about organizing these files in a subdirectory tree, but NetApp Support told me that this would not solve the problem, as maxdirsize is calculated on the parent directory and all subdirectories are included in its calculation.

 

So how can I solve this problem without creating new volumes? I want a scalable approach in which the only limitation is the volume size, not other parameters that I cannot manage automatically.

 

Could you help me please?

Thanks!


Re: Listing all connected clients to a volume?


Hi David,

Thanks but could you let me know what version of ONTAP you used?

 

I only get the following output fields even in diag mode. There is no client information in the output.

 

::*> statistics top file show -sort-key total_ops -max 20

filer01 : 6/5/2017 16:50:11
*Estimated
     Total
      IOPS       Node  Vserver             Aggregate Volume File
---------- ---------- -------- --------------------- ------ ---------------------------------------------------------------------------
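For reference, another way to see which client addresses currently hold connections to a node or SVM, independent of the per-file statistics above, is the connection table (ONTAP 9.x syntax assumed; node and SVM names are placeholders):

::> network connections active show-clients -node filer01-01
::> network connections active show -node filer01-01 -vserver vs1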

 

Thanks,

 

Re: What determines an AD DC discovered status


Hi Steve,

 

Without looking through ONTAP source code, I'd assume that the SVM (the "client machine") uses DNS to locate the domain controller, as per the following:

 

https://msdn.microsoft.com/en-au/library/cc717360.aspx

 

To locate domain controller (DC) hosting NC N, the client machine issues a DNS query for the SRV record _ldap._tcp.dc._msdcs.N, constructed from the NC name (N).
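As a quick illustration of that lookup (example.com is a placeholder for the actual domain/NC name):

C:\> nslookup -type=SRV _ldap._tcp.dc._msdcs.example.com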

 

I don't have details on the status results, but I'd also assume that "slow" refers to domain controllers that have been identified by a slow-link detection algorithm based on ICMP.

Is there a particular issue you are trying to troubleshoot? If so, I'd verify the AD Sites and Services configuration and the site network link speeds.

 

/Matt

Manage SVM DR from SystemManager


Hi all,

 

I'm in the process of writing a DR procedure for a customer and I would like to propose the SVM DR solution for their CIFS shares instead of multiple SnapMirror volume replications.

All the FAS systems are running the latest ONTAP 9.1 and SVM DR is running perfectly, but I wonder whether the future roadmap includes the possibility of managing an activated DR SVM from the destination System Manager, in case of a long disaster scenario.

 

For now, even when I start the destination SVM and stop the source one, I cannot manage volumes, shares or quotas from the web interface. From a customer's point of view, it would be much more efficient, intuitive and user-friendly to access the configuration of the destination SVM from System Manager. :-)

Re: How to avoid maxdirsize excess


Each subdirectory only counts as one entry in its parent directory. If you create one file per subdirectory you will have the same problem, but if you group files into fewer subdirectories you won't have an issue.

 

For example:

 

Dir1

   File1

   File2

   File3

   File4

 

In the layout below, Dir2 and Dir3 take up directory space in Dir1, but File1-File4 do not; they consume space only in their parent directory (Dir2/Dir3). You can test this with the ls -l command: add files and watch the directory size of the parent.

 

Dir1

   Dir2

      File1

      File2

   Dir3

      File3

      File4

 

ls -l

drwxr-xr-x 3 username group 34 Jun 6 09:34 testdir

 

The 34 is the directory size. Add a directory within testdir and add files to that subdirectory, and you will see that the directory size of testdir does not change.
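If the application writing the files can be changed, a common complementary pattern is to fan the files out into fixed-width hash buckets so that no single directory ever accumulates an unbounded number of entries. A minimal Python sketch of the idea (the root path, bucket width and depth are arbitrary illustrative choices, not NetApp recommendations):

import hashlib
from pathlib import Path

ROOT = Path("/vol/attachments")   # placeholder for the mounted volume/junction path

def bucketed_path(filename: str, levels: int = 2, width: int = 2) -> Path:
    """Map a file name to ROOT/ab/cd/filename using its MD5 prefix,
    so each directory holds a bounded number of entries."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return ROOT.joinpath(*parts, filename)

target = bucketed_path("invoice-2017-06-12345.pdf")
target.parent.mkdir(parents=True, exist_ok=True)  # create the ab/cd buckets as needed
# ...write the attachment to 'target' instead of into one flat directory...
print(target)

With two levels of two hex characters each, the files spread over at most 65,536 leaf directories, and every parent directory stays small.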

 

Good luck,

Alex

8-Node MetroCluster Cisco MDS 9148S RCF


Hi all,

 

Does anybody have a Cisco MDS 9148 RCF for an 8-node MetroCluster using ATTO 6500N bridges? I'm about to write the config myself, but I first wanted to ask the community in case somebody has already done this, because on the support page you can only find RCFs for 4-node MetroClusters!

 

Thank you!

 

Regards,

Netsu

Snapmirror initialize and TCP window size


Is it possible to alter the default TCP window size on a snapmirror initialize? We need to migrate clients to a new IDC. The link has significant latency, and the SnapMirror process is extremely slow and runs for days. Normal production SnapMirror between DR sites is fine and I don't want to impact current production. I would like to test a larger TCP window size for an individual replication. I have read that an individual transfer can have an altered TCP window size as defined in the snapmirror.conf file. The problem is that these migrations are 'one and done' type operations, and the CLI is used to run the 'snapmirror initialize' command. Is there a way to alter the TCP window size in the initialize command? Is there a way to have the initialize command read the snapmirror.conf file? Any suggestions would be appreciated. I am currently testing in a 7-mode environment running 8.2.3P2; eventually cDOT environments will likely need to be addressed as well.


Getting interface CRC errors using NMSDK/API


Hi all,

I am working on automation to monitor all of the 7-Mode/cDOT storage systems we have by checking network port status. Through this automation we want to check whether the ports that are part of an ifgrp are up and working, check their MTU settings, and check whether there are any CRC errors.

 

Using the NMSDK, I can get ifgrp and MTU details for the interfaces, but I found no option to get CRC errors listed for an interface. Is there any way to get CRC error details from the NetApp API? The counters I am after look like this (e.g. from ifstat):

 

RECEIVE
Frames/second: 25446 | Bytes/second: 16472k | Errors/minute: 0
Discards/minute: 0 | Total frames: 137g | Total bytes: 281t
Total errors: 0 | Total discards: 0 | Multi/broadcast: 0
No buffers: 0 | Non-primary u/c: 0 | L2 terminate: 0
Tag drop: 0 | Vlan tag drop: 0 | Vlan untag drop: 0
Vlan forwards: 0 | Vlan broadcasts: 0 | Vlan unicasts: 0
CRC errors: 0 | Runt frames: 0 | Fragment: 0
Long frames: 0 | Jabber: 0 | Bus overruns: 0
Queue drop: 0 | Xon: 0 | Xoff: 0
Jumbo: 2833m
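I'm not sure the classic net-* APIs expose those CRC counters; one avenue worth trying is the performance API, which can surface per-NIC error counters. A rough sketch using the NMSDK Python bindings; the object name ("nic_common" here, "ifnet" on some 7-Mode releases) and the counter name ("rx_crc_errors") are assumptions to verify first with perf-object-list-info / perf-object-counter-list-info, and cDOT may require an explicit instances list:

from NaServer import NaServer, NaElement

def get_crc_counters(host, user, password):
    s = NaServer(host, 1, 15)
    s.set_style("LOGIN")
    s.set_admin_user(user, password)
    s.set_transport_type("HTTPS")

    # Ask the perf subsystem for the assumed NIC object/counter.
    req = NaElement("perf-object-get-instances")
    req.child_add_string("objectname", "nic_common")   # try "ifnet" on 7-Mode
    counters = NaElement("counters")
    counters.child_add_string("counter", "rx_crc_errors")
    req.child_add(counters)

    res = s.invoke_elem(req)
    if res.results_status() != "passed":
        raise RuntimeError(res.results_reason())

    # Print one line per NIC instance: <port> <counter> <value>
    for inst in res.child_get("instances").children_get():
        name = inst.child_get_string("name")
        for c in inst.child_get("counters").children_get():
            print(name, c.child_get_string("name"), c.child_get_string("value"))

get_crc_counters("filer01", "apiuser", "password")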

Correct process of removing volume from Aggregate when no longer needed?


Hi All

 

Just a quick question about removing snapmirrored/vaulted volumes as I have had an issue in the past and just wanted to confirm the correct steps to be taken.

 

The previous issue I had was related to deleting a SnapVault relationship after clearing an old relationship. This relationship sat on our destination filer and vaults from a SnapMirror relationship that is also replicated to this filer. It had originally disappeared from view but then reappeared. When trying to release the relationship I was getting a message saying "no releaseable destination found that matches those parameters".

 

We have two NetApp filers at two separate sites. SnapMirror runs on the volumes from Site A to Site B, and on Site B we then use SnapVault to a separate volume for retention backups. We are using 7-mode 8.2.3P3 on both filers.

 

I normally follow the steps below when removing volumes. All of our volumes are Fibre Channel LUNs presented to VMware, serving datastores to our virtual machines.

 

  1. Perform VMware tasks such as unmounting and deleting the datastores in the vSphere web client
  2. Break the SnapMirror relationship for the volume being decommissioned by doing a quiesce followed by a break; I normally do this from the source filer. Check that the relationship is removed from view on the production and DR filers using OnCommand System Manager.
  3. On the source NetApp filer, go to LUNs, offline the LUN in question, and then delete the LUN.
  4. On the source NetApp filer, go to Volumes, offline the volume and then delete the volume.
  5. On the destination filer, remove any schedules for the SnapVault relationship that is being removed.
  6. On the destination filer, stop the related SnapVault relationship, followed by a release.
  7. Once the SnapVault relationship is deleted and sufficient snapshots have built up, remove the SnapVault source and destination volumes: offline the LUN, delete the LUN, offline the volume and delete the volume.

Any steps I have missed, or any advice if there is a better way to do this, would be much appreciated.

 

Many Thanks in advance for your help.

 

 

 

 

  

 

Re: Correct process of removing volume from Aggregate when no longer needed?


You have pretty much everything covered, but there should be a slight change in the order.

 

Step 2 should be done on the destination.

Step 6 should be done on the source.

 

quiesce, break, [delete] & release

That means step 6 should come right after your step 2.

 

Once the SnapVault/SnapMirror relationship is completely removed (no relationship reported for the volume on either the source or the destination filer),

only then would I proceed with the LUN/volume unmapping, offline and delete process.
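As a rough 7-Mode sketch of that order (filer, volume and qtree names below are placeholders; the KB linked below has the authoritative sequence):

# SnapMirror: quiesce/break on the destination, release on the source
dstfiler> snapmirror quiesce dstfiler:dstvol
dstfiler> snapmirror break dstfiler:dstvol
srcfiler> snapmirror release srcvol dstfiler:dstvol

# SnapVault: stop on the secondary, release on the primary
secfiler> snapvault stop /vol/sv_vol/qtree1
prifiler> snapvault release /vol/srcvol/qtree1 secfiler:/vol/sv_vol/qtree1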

 

This link might help you.

https://kb.netapp.com/support/s/article/ka31A00000013XjQAI/how-to-correctly-delete-a-snapmirror-relationship

 

thanks,

Robin.

Re: Snaplock Compliance aggregate


I tried zeroing the disks on the server (IBM System x3650) using an erasing tool, and it failed.
The whole zeroing process finished with non-fatal errors, but the following SCSI error occurred:
 ---
 * fail scsi Disk IBM ServerRAID MF5015 2.12 837GiB(898GB) 001709a72368e8c820d0251F04b
 ---

But this trouble was solved another way.
The Compliance aggregate was deleted with the following ONTAP commands:

::*>volume lost-found show
::*>volume lost-found delete -node "nodename" -dsid "DSID"
::*>run -node "nodename" aggr offline "aggr name"
::*>aggr remove-stale-record -aggregate "aggr name" -nodename "nodename"

Of course, it is necessary that every file in the Compliance volumes has expired; now all of the drives can be reused.

If a drive still holds unexpired data, it cannot be reused.

thanks,
