Friday, February 19, 2010

Cannot create a VM file larger than 256GB in ESX 4.0

When I tried to create a 500GB virtual disk on local storage on an ESX 4.0 server, I got the error message "Cannot create a VM file or virtual disk larger than 256GB" in the vSphere Client.

I googled around, and the only solution I found was to increase the block size of the local storage in ESX 4.0, which you normally can only do during the ESX 4.0 installation.


Increasing block size of local storage in ESX 4.0

Symptoms

* You cannot create a file (virtual disk) larger than 256 GB on local storage.
* You cannot reformat local storage (device or resource busy).

Resolution

ESX 4.0 uses a different installer than previous versions of ESX, and it uses a default block size of 1 MB when creating the local VMFS volume. The largest file that can be created with a 1 MB block size is 256 GB.

To create a file bigger than 256 GB, the VMFS filesystem needs to have a block size larger than 1 MB. The maximums are as follows:






Block Size    Maximum File Size
1 MB          256 GB
2 MB          512 GB
4 MB          1 TB
8 MB          2 TB


For more information about block sizes, see Verifying the block size of a VMFS data store (1003565).
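
If you are not sure which block size an existing datastore was formatted with, you can query it from the service console; the datastore name below is just a placeholder:

vmkfstools -Ph /vmfs/volumes/your-datastore-name

The output includes the VMFS version, the capacity, and the file block size of the volume.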

The service console in ESX 4.0 runs as a virtual machine on local storage. As such, you cannot reformat this volume.

To resolve this issue, perform one of these workarounds:

  • Re-install the ESX host on a different drive (for example, a second RAID set or boot from SAN), and leave the original disk for the VMFS volume. You can then choose your blocksize when creating the second datastore.

  • Install ESX 3.5, create the volume with desired blocksize, then upgrade to ESX 4.0.

  • Carve out a new LUN or RAID set on the local controller for a new volume. Add physical disks as necessary.


You cannot create a second datastore on the same drive via the ESX GUI. You must use the following command:

Note: You may need to create a partition on the free space first with fdisk.

vmkfstools -C vmfs3 -b Xm -S local2mBS /vmfs/devices/disks/naa.xxxxxxxxxx:y


where:

  • Xm is the blocksize (1m, 2m, 4m, or 8m).

  • local2mBS is your volume name. If the volume name has a space (for example, volume name), enclose it in quotation marks (for example, "volume name").

  • naa is the naa identifier, and y is the partition number. To determine this, run ls -la in the /vmfs/devices/disks folder.


Note: Depending on your disk controller type, naa. may be replaced with eui., t10., or mpx.. For more information, see Identifying disks when working with VMware ESX (1014953).
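
As a rough end-to-end example, assuming the spare local disk shows up as naa.600508xxxxxxxxxx (a made-up identifier) and you have already created partition 1 on it with fdisk, the commands from the service console would look something like this:

ls -la /vmfs/devices/disks
vmkfstools -C vmfs3 -b 8m -S local8mBS /vmfs/devices/disks/naa.600508xxxxxxxxxx:1

This creates a new VMFS datastore named local8mBS with an 8 MB block size on partition 1 of that disk, which allows virtual disk files up to 2 TB.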

Alternatively, you can reconfigure the installer to install ESX 4.0 with a different blocksize:

  1. Boot the ESX installation DVD.

  2. Press Ctrl+Alt+F2 to switch to the shell.

  3. Run:

    ps | grep Xorg


  4. Kill the PID of the process whose command line shows Xorg -br -logfile ....

    For example, run:

    kill 590


    where 590 is the PID.

    Note: If you specified a GUI mode installation, killing the process identified as Xorg may switch you back to another console. If this occurs, press Ctrl+Alt+F2 to return to the console.

  5. To switch to the configuration directory, run:

    cd /usr/lib/vmware/weasel


  6. To edit the configuration script, run:

    vi fsset.py


    Note: For more information on editing files, see Editing configuration files in VMware ESX (1017022).

  7. Locate class vmfs3FileSystem(FileSystemType):.

  8. Edit the blockSizeMB parameter to the block size that you want. It is currently set to 1. The only values that work correctly are 1, 2, 4, and 8 (see the note after these steps for a non-interactive way to make this change).

    Note: Press i for insert mode.

  9. To save and close the file, press Esc, type :wq, then press Enter.

  10. To switch back to the root directory, run:

    cd /


  11. To launch the installer with the new configuration, run:

    /bin/weasel


    And continue the ESX 4.0 installation.
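
If you would rather not edit the file interactively in vi, and assuming sed is available in the installer shell and the attribute appears in fsset.py exactly as blockSizeMB = 1, a one-liner like the following should make the same change (this is only a sketch; I have not verified it on every build):

cd /usr/lib/vmware/weasel
sed -i 's/blockSizeMB = 1/blockSizeMB = 8/' fsset.py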

How to rebuild an RPM package

This was the first time I patched and rebuilt an RPM package; it came up when I tried to add Unicode support on CentOS 5.4 with PHP and PCRE. I received PHP warning messages whenever I used the Unicode regex escapes ('\X', '\pL', etc.) inside a character class, such as '[\X-]'. After googling around, I found that this happens because Unicode support is missing from the PCRE package.

Note: This example uses the PCRE source RPM package.



I had downloaded PHP 5.3.1 and compiled it manually with the configure option:

--with-pcre-regex=/usr


PHP 5.3.x includes PCRE support built in; however, the yum package for PCRE is not built with Unicode support, so the source RPM needs to be downloaded and patched according to this page:

http://gaarai.com/2009/01/31/unicode-support-on-centos-52-with-php-and-pcre/

This is needed for Unicode regexp support, so we can do input validation in a variety of character sets.

By default, at least on a Red Hat box, rpm uses /usr/src/redhat as the location of the %_topdir macro, which specifies where most of the work involved in building an RPM takes place.

You can and should change this; it is a good idea to make this a directory that you can write to with a non-privileged account, to avoid compiling and building packages as root.

Why?

A lot of commands get executed when building a package. Sometimes things go wrong. If you're root, important things may be damaged. A big mess may be made. I once (foolishly) rebuilt a proftpd package as root, and the "make install" stage blew up and left newly compiled files all over the place, whereas if I'd been a regular user, I'd have simply gotten a bunch of "permission denied" messages.

  1. Anyway, the macro is easily changed by adding something like the following to ~/.rpmmacros:

    # Path to top of build area
    %_topdir /home/you/src/rpm




  2. If you have never worked on RPMs in this manner, you will need to create a few directories in which to work. I use a sub-directory in my homedir:

    #> mkdir -p ~/src/rpm
    #> cd ~/src/rpm
    #> mkdir BUILD RPMS SOURCES SPECS SRPMS
    #> mkdir RPMS/i386 RPMS/i486 RPMS/i586 RPMS/i686 RPMS/x86_64 RPMS/noarch RPMS/athlon




  3. Download the PCRE Source RPM, and install it:

    #> wget ftp://ftp.pbone.net/mirror/ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/pcre-6.6-2.el5_1.7.src.rpm

    #> rpm -ivh pcre-6.6-2.el5_1.7.src.rpm


    Then install the following packages from the ISO image if you did not install them during the initial OS installation:

    #> rpm -ivh beecrypt-4.1.2-10.1.1.x86_64.rpm rpm-libs-4.4.2.3-18.el5.x86_64.rpm rpm-4.4.2.3-18.el5.x86_64.rpm elfutils-0.137-3.el5.x86_64.rpm elfutils-libs-0.137-3.el5.x86_64.rpm rpm-build-4.4.2.3-18.el5.rf.x86_64.rpm




  4. Edit the PCRE spec file to enable Unicode properties support:

    Open the ~/src/rpm/SPECS/pcre.spec file and find the following line:

    %configure --enable-utf8


    Change it to include the Unicode properties option:

    %configure --enable-utf8 --enable-unicode-properties


    Then save and close the file.



  5. Rebuild the PCRE RPM package and install the new packages:

    #> rpmbuild -ba ~/src/rpm/SPECS/pcre.spec
    #> rpm -Uvh RPMS/x86_64/pcre-6.6-2.7.x86_64.rpm RPMS/x86_64/pcre-devel-6.6-2.7.x86_64.rpm --force





  6. Then run the pcretest program; you should now see "Unicode properties support" in the output (see the PHP test at the end of this post for another way to verify):

    $ pcretest -C
    PCRE version 6.6 06-Feb-2006
    Compiled with
    UTF-8 support
    Unicode properties support
    Newline character is LF
    Internal link size = 2
    POSIX malloc threshold = 10
    Default match limit = 10000000
    Default recursion depth limit = 10000000
    Match recursion uses stack




I also found a very helpful guide that details this process very nicely: How to patch and rebuild an RPM package.
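
Since the point of all this was to stop the PHP warnings, it is worth checking from the PHP side as well. Assuming your PHP build is linked against the system libpcre (as with the --with-pcre-regex=/usr option above), a quick test like this should now return int(1) instead of a compilation warning:

$ php -r 'var_dump(preg_match("/\p{L}/u", "a"));'
int(1)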

Export a File System for remote NFS client


This article presents the methods for preparing a set of directories that can be exported to remote NFS clients under Linux.

Under Linux this can be accomplished by editing the /etc/exports file.


About the /etc/exports File

The /etc/exports file contains an entry for each directory that can be exported to remote NFS clients. This file is read automatically by the exportfs command. If you change this file, you must run the exportfs command before the changes can affect the way the daemon operates.

Only when this file is present during system startup does the rc.nfs script execute the exportfs command and start the NFS (nfsd) and MOUNT (mountd) daemons.

Edit the exports file and add the following lines:

/dir/to/export host1.yourdomain.com(ro,root_squash)
/dir/to/export host2.yourdomain.com(rw,no_root_squash)


Where:

* /dir/to/export is the directory you want to export.
* host#.yourdomain.com is the machine allowed to mount this directory.
* The ro option means the directory is exported read-only, and the rw option means it is exported read-write.
* The root_squash option prevents the remote root user from having root access to this directory, and the no_root_squash option allows it.

For this change to take effect, you will need to run the following command on your terminal:

# /usr/sbin/exportfs -avr
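
To confirm that the directories are actually being exported, you can list the current export table on the server:

# /usr/sbin/exportfs -v
# /usr/sbin/showmount -e localhost

exportfs -v prints each exported directory with the options in effect, and showmount -e shows the export list as NFS clients will see it.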


The next step is to configure automatic mounting of this NFS export.

On your remote Linux machine, edit /etc/fstab and add the following line:

NFS_host_name:/dir/to/export /local/mapping/dir nfs hard,intr 0 0


Save and run the following command:

mount -t nfs -a


to mount all nfs entries in /etc/fstab with the corresponding options.
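
If you want to verify the export from the client by hand before relying on /etc/fstab, you can query the server and do a one-off mount; the host name and paths below are the same placeholders used above:

showmount -e NFS_host_name
mount -t nfs NFS_host_name:/dir/to/export /local/mapping/dir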

If you want to squeeze out a bit more performance, you can experiment with the rsize and wsize parameters:

NFS_Server_name:/dir/to/export /local/mapping/dir/ nfs rsize=8192,wsize=8192,timeo=20,retrans=6,async,rw,noatime,intr 0 0


Troubleshooting:

  1. As with most things in Linux, watch the log files. If you get an error on the client when trying to mount a share, look at /var/log/messages on the server. If you get an error like "RPC: program not registered", it means that the portmap service isn't running on one of the machines. Verify all the processes are running and try again (see the check commands after this list).



  2. The second problem has to do with username mappings, and is different depending on whether you are trying to do this as root or as a non-root user.

    If you are not root, then usernames may not be in sync on the client and the server. Type id [user] on both the client and the server and make sure they give the same UID number. If they don't then you are having problems with NIS, NIS+, rsync, or whatever system you use to sync usernames. Check group names to make sure that they match as well. Also, make sure you are not exporting with the all_squash option. If the user names match then the user has a more general permissions problem unrelated to NFS.



If you cannot find a solution to your problem, refer to the following URL:

http://www.higs.net/85256C89006A03D2/web/PageLinuxNFSTroubleshooting

Note:

/etc/exports is VERY sensitive to whitespace - so the following statements are not the same:

/export/dir hostname(rw,no_root_squash)
/export/dir hostname (rw,no_root_squash)


The first will grant hostname rw access to /export/dir without squashing root privileges. The second will grant hostname rw privs w/root squash and it will grant EVERYONE else read-write access, without squashing root privileges. Nice huh?
