
Community Forum > ZPOOL resize on HP Array logical drive? Why are partitions used?

Hi there!
Bothering again...

I tried to expand/resize a ZPOOL created on a logical drive off an HP SmartArray P420 RAID controller...
I added drives to the RAID and completed the HP expand/resize operation. Now the disk (from the OS perspective) shows the increased size, as it should.

The problem is that the drive is still partitioned like it was before - the first partition is the ZFS data partition, and then there is partition 9, which is... what???

Why do you use partitioning on the underlying disks??? Why do you not use the entire disk (given that only whole disks can be selected when creating pools)??

Anyway... how can I now EXTEND/EXPAND the ZPOOL without shutting down/offlining this zpool (and doing a manual move of partition 9 to the end plus a resize of partition 1)?
I would really like to have one contiguous partition for ZFS, instead of having the disk split in parts (the original partition 1, the 8 MB partition 9, and then a new partition N using the new space)??

Please, advise.

What is that partition 9 anyway? Is it used by anything QS-related? Can I just remove it?

LSBLK shows:

```
sdc      8:32  0  952.2G  0  disk
|-sdc1   8:33  0    405G  0  part
`-sdc9   8:41  0      8M  0  part
```

parted print:

```
Model: HP LOGICAL VOLUME (scsi)
Disk /dev/sdc: 1022GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size    File system  Name  Flags
 1      1049kB  435GB  435GB   zfs          zfs
 9      435GB   435GB  8389kB
```


I suppose I could just add another partition to use the free space and then MANUALLY add that partition to the zpool (as the GUI won't let me select any partition to use as a vdev to expand the pool).

Why aren't the disks used entirely (as whole devices)? Why is a partition created and used instead? And what is the small 8 MB partition 9???

Best regards,
M.Culibrk

March 4, 2017 | Registered Commenter M.Culibrk

Hello M,

The partitions are created and managed by ZFS when you pass a raw block device into ZFS for use in the creation of a Storage Pool. ZFS supports auto-expanding of Storage Pool space if the underlying disk LUN has been grown, but it will require a temporary outage of the Storage Pool during which no I/O is occurring.
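For reference, the generic ZFS-on-Linux grow sequence after a LUN resize looks roughly like the following. This is only a sketch: the pool name `tank` and device `sdc` are placeholders, and the exact steps for a given QuantaStor system should wait on a log review.

```shell
# Sketch of the generic ZoL expand procedure after growing a LUN.
# "tank" and "sdc" are placeholder names, not taken from this system.

# Ask the kernel to re-read the resized LUN's capacity
echo 1 > /sys/block/sdc/device/rescan

# Allow the pool to consume newly grown vdev space
zpool set autoexpand=on tank

# On many ZoL versions this triggers in-place expansion of the vdev;
# on others a brief outage (zpool export tank; zpool import tank) is
# needed for the new size to be picked up.
zpool online -e tank sdc

# Verify: SIZE / EXPANDSZ should now reflect the new capacity
zpool list tank
```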

Can you provide a set of logs from the system for review by our support team? We can then confirm if your system will be able to support the auto expand feature of ZFS. If it is supported we can then give you the CLI steps required to enable it and grow the pool.

**Log Gathering**

The procedure for uploading the log files is below.

1) Please log in to the QuantaStor system via ssh, or physically at the system console, as the `qadmin` user or a user with similar sudo access; you can also use the root user if you have that configured. Please note that we also provide a convenient option in the WebUI for systems with internet access: right-click on the Storage System and choose 'Send log report...'.

Note: if you did not change the password for the qadmin user, the default password is `qadmin`

2) Please run the `sudo qs-sendlogs` command from a shell console to generate the log file. If your system has internet access it will automatically upload the logfiles to our FTP server.

If your system does not have internet access, please proceed to step 3

3) Copy the logfile generated by the `sudo qs-sendlogs` command from your QuantaStor unit using your preferred ssh/scp/sftp client and upload it using your preferred ftp client to our ftp server.

Our FTP login for uploading the logfiles is below:

```
username: report

password: REP0RT (that 0 is a zero)

ftp server ip: 63.229.31.162

port: 821
```

Please let us know once you have had a chance to upload the logs.

Thank You,
Chris Golden
OSNEXUS Support

March 4, 2017 | Registered Commenter Chris Golden

Hi!

Thanks for the quick response!

I was running out of time so I went the manual way (oh my... what an experience that was!)

It seems totally "reasonable" to me that ZFS won't "autoexpand" over the whole disk all by itself. The reason is that ZFS is given only the first partition as a "vdev". Besides, the second partition of approx. 8 MB is created at the end of the disk when the disk is added to the zpool...
So I do not see any "logical" way ZFS would magically extend to the end of the disk... (it would have to move the other partition to the end, extend its own partition, etc. to be able to expand).
And I tested it myself (the hard way).

After the disk was expanded, the small 8 MB partition sat "in the middle" of the disk, right after the ZFS partition. I tried to stop/start the pool, export/import... nothing happened. ZFS would not expand. It also constantly reported "--" in the EXPANDSZ field, which is logical: its partition was already used entirely.
So I tried removing just the small 8 MB partition and repeated all the offline/online, stop/start, export/import... nothing. EXPANDSZ was still "--".
Then I manually resized the (now only) ZFS partition with gdisk (delete the partition, recreate it with the increased size, manually re-set the same partition GUID, etc.).
NOW zpool/zfs finally showed EXPANDSZ=xxxGB!! (The autoexpand property is off by default. Is that right??)
So I set "autoexpand=on" and the pool finally expanded to the new size of the disk! Hurray!
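For anyone following along, the manual procedure above can be condensed into something like the following sketch, using non-interactive `sgdisk` in place of interactive `gdisk`. The pool name `tank` and `/dev/sdc` are placeholders, and as the rest of this thread shows, this is a risky operation - do not attempt it without backups.

```shell
# Rough reconstruction of the manual expansion steps described above.
# "tank" and /dev/sdc are placeholders; this can destroy data if misapplied.

# Record the unique GUID of the ZFS data partition so it can be preserved
OLD_GUID=$(sgdisk -i 1 /dev/sdc | awk -F': ' '/unique GUID/ {print $2}')

# Delete the 8 MB partition 9 and partition 1, then recreate partition 1
# spanning the whole disk with the same GUID (BF01 = Solaris/ZFS type code)
sgdisk -d 9 -d 1 -n 1:0:0 -t 1:BF01 -u 1:"$OLD_GUID" /dev/sdc
partprobe /dev/sdc

# Let ZFS pick up the enlarged partition
zpool set autoexpand=on tank
zpool online -e tank /dev/sdc1
zpool list tank   # SIZE / EXPANDSZ should reflect the new space
```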

BUT... the pool export/import removed the pool and volumes from all the config (host assignments etc.) AND changed at least the volume LUN (I'm not sure about the EUI), with vSphere claiming "the datastore signature changed" etc. etc. So, after all this gymnastics, I ended up with an "inaccessible" datastore in vSphere...
Luckily, only a few VMs were on that datastore, so it was not a big problem to remove/re-import those in vSphere, but still...

The whole "process" seems a little too "bad/scary"...
I searched through some ZoL forums/mailing lists and found that ZoL by default creates the 8 MB partition when it is given a whole disk to use (as they say, "for UEFI/Solaris/upstream compatibility")...
I do not know if this behavior can be modified (a module parameter??) but as it is... it's a pain.

Is there a way to manually set the LUN and EUI of a storage volume in QS?

Thanks for all the help and best regards,
M.Culibrk

March 6, 2017 | Registered Commenter M.Culibrk

Hello M,

We had requested logs so that we could review the state of the system and give you valid steps to try to correct your problem.

The steps that you have taken were unnecessary and have put the system in an unknown state that we will be unable to troubleshoot at this point.

The partitions are created when you first create the pool, and you should never need to go in and manually re-partition the disk. If there is a problem expanding the disk, that is an issue we need to research to determine why it is not able to expand and to correct the problem.

Since the system is in an unknown state at this time I would recommend you remove the current configuration and start again with a new configuration to see if you see the same symptom.

If it still will not "autoexpand" after reconfiguring the system let us know so we can look at the configuration and logs to determine the true cause.

If this is a production system with a valid license and you need immediate assistance please open a support case with us and we can assist you with this issue.

Thank You,
Robert Staffeld
OSNEXUS Customer Support
email: support@osnexus.com
t: 1.866.219.1757 | 425-279-0172

March 9, 2017 | Registered Commenter Robert Staffeld

Sorry... I was in a hurry...
This is not a real "production" system, but it is being used, so I cannot just "erase and start over".

But, I'll test the same exact situation shortly on another machine I have currently "free" and let you know.

But just by "reasonable thinking" I doubt it will be any different.
Once you assign a "whole device" to a zpool, the partitions are automatically made on the device - as far as I understood from the ZoL documents/forums, for "compatibility reasons" (UEFI).
When you later extend the device, I doubt ZFS/ZPOOL will just "extend itself", as it was given just the partition to use and not the entire disk.
I really doubt the zfs/zpool code would re-partition the drive (or move the other partition to the end of the drive, as it was when the pool was created) and extend its own partition to fill the unused space...
This will be really easy to test.
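The test described above can be sketched without real hardware using a file-backed pool (a sketch only; the pool name `testpool` and file path are placeholders, it requires root and the ZFS module, and note that ZoL only creates the sdX1/sdX9 partition pair on real whole-disk vdevs, not on file vdevs):

```shell
# Sketch of the "easy test": grow a vdev and see whether the pool expands.
# "testpool" and /tmp/disk.img are placeholders. Requires root + ZoL.

truncate -s 1G /tmp/disk.img
zpool create testpool /tmp/disk.img

# On a real whole disk, ZoL would label it GPT here and create two
# partitions: sdX1 (data) and the ~8 MiB reserved sdX9.

# Grow the backing "device", then check whether the pool sees the space
truncate -s 2G /tmp/disk.img
zpool online -e testpool /tmp/disk.img
zpool list testpool   # SIZE / EXPANDSZ shows whether expansion worked

zpool destroy testpool
rm /tmp/disk.img
```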

I'll let you know shortly - and send you logs before and after the "expansion".

Regards,
M.Culibrk

March 12, 2017 | Registered Commenter M.Culibrk