Friday, April 2, 2010

How to extend LVM on a VMware Guest running Linux

Recently I found that one of my virtual Linux machines running on a VMware ESX server had run out of disk space. After googling around, I realized the fix is quite simple because the machine uses LVM.

The following solution should apply to any VMware virtual machine running Linux with LVM.

Expand the disk

Turn off the virtual machine whose disk space you wish to extend.


You can expand the virtual disk either from the ESX server's hidden console or through the VMware Infrastructure Client, which manages your ESX server remotely. I will cover both ways here.

  1. Method One:

    The GUI method is the easier of the two. Select your VMware guest machine in the remote management console of the VMware Infrastructure Client, then click "Edit Virtual Machine Settings" under "Getting Started" in the right panel. Select Hardware -> Hard Disk 1 -> Capacity - New Size (on the right side), and expand the capacity to whatever size you need. In our case, I expanded from 100GB to 200GB. Then press "OK".


  2. Method Two:

    For ESXi Server, press "Alt"+"F1", then type the following to enter the hidden console mode:

    unsupported



    Then press "Enter", now enter your ESXi server root password. You should be able to successfully log into the hidden ESXi Server console.

    For ESX Server, pressing Alt-F1 brings up the Linux login prompt of the service console, where you can log in to a command prompt.


Now enter the following command to extend the existing virtual disk to 200GB:

vmkfstools -X 200G "Your VMDK file here"


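If you are not sure of the exact path to the VMDK, the files live under the datastore volume on the ESX host; for example (the datastore and VM names here are placeholders for your own):

# cd /vmfs/volumes/datastore1/MyLinuxVM
# ls *.vmdk
# vmkfstools -X 200G MyLinuxVM.vmdk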

Note:

If you are running a VMware Workstation product, you can expand the virtual disk on your Windows machine by running this command:

vmware-vdiskmanager -x 200G "My harddisk.vmdk"


With the above steps, the extra virtual disk space is prepared. Now start your VMware virtual machine and open a terminal session to continue expanding the LVM.

Issue the df -k command, which shows that the logical volume is at 100% usage. We need to create a new partition on /dev/sda. Type ls -al /dev/sda* to list the existing partitions. Our new partition will be /dev/sda3.

# ls -al /dev/sda*
brw-r----- 1 root disk 8, 0 Apr 2 19:26 /dev/sda
brw-r----- 1 root disk 8, 1 Apr 2 19:26 /dev/sda1
brw-r----- 1 root disk 8, 2 Apr 2 19:26 /dev/sda2


Type fdisk /dev/sda, then type n for a new partition. Enter p for primary and 3 for the partition number (in this instance; obviously enter the partition number that matches your environment). Also accept the default first and last cylinders. Finally, type w to write the table to disk and exit.
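A typical session looks roughly like this (the cylinder prompts are abbreviated; fdisk will show the actual values for your disk):

# fdisk /dev/sda
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (...): <Enter>
Last cylinder (...): <Enter>
Command (m for help): w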

If prompted to reboot, do so to ensure that the new partition table is written. After the restart, type ls -al /dev/sda* to verify that the new partition was created successfully.
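Note: on many systems you can skip the reboot by asking the kernel to re-read the partition table with partprobe (part of the parted package):

# partprobe /dev/sda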

# ls -al /dev/sda*
brw-r----- 1 root disk 8, 0 Apr 2 19:26 /dev/sda
brw-r----- 1 root disk 8, 1 Apr 2 19:26 /dev/sda1
brw-r----- 1 root disk 8, 2 Apr 2 19:26 /dev/sda2
brw-r----- 1 root disk 8, 3 Apr 2 19:26 /dev/sda3


After verifying the new partition, we need to create a physical volume on it and add it to the volume group.

Type pvcreate /dev/sda3

[root@rhel5 ~]# pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created


Type vgextend VolGroup00 /dev/sda3

# vgextend VolGroup00 /dev/sda3
Volume group "VolGroup00" successfully extended


Now we need to extend the logical volume. First, type vgdisplay to see how much free space is available in the volume group:

# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 99.84 GB
PE Size 32.00 MB
Total PE 3345
Alloc PE / Size 2868 / 99.88 GB
Free PE / Size 2870 / 99.97 GB
VG UUID MkluWy-e5PA-QTGN-fF7k-ZxO3-6bC7-qxfCii


This shows that there is 99.97GB free that we can add to the logical volume. To extend the volume, type lvextend -L +99.97G /dev/VolGroup00/LogVol00. In my case this failed with the error "Insufficient free space: 2871 extents needed, but only 2870 available". After a quick google (Insufficient Free Extents for a Logical Volume), it turned out we simply needed to request a slightly smaller size. Changing 99.97GB to 99GB solved the problem.

# lvextend -L+99G /dev/VolGroup00/LogVol00
Rounding up size to full physical extent 99 GB
Extending logical volume LogVol00 to 198 GB
Logical volume LogVol00 successfully resized
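A tidier way to avoid this rounding problem is to allocate by extents rather than gigabytes; lvextend can take all remaining free space in the volume group directly:

# lvextend -l +100%FREE /dev/VolGroup00/LogVol00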


Finally we need to resize the file system by typing resize2fs /dev/VolGroup00/LogVol00.

resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 20101888 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 20101888 blocks long.


Type df -k to see if the new space is available to the logical volume.

df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
1713916000 82428400 88250280 48% /
/dev/sda1 101086 24538 71329 26% /boot
tmpfs 1037744 0 1037744 0% /dev/shm


The logical volume has now been resized, and usage is down to just 48%!

Thursday, April 1, 2010

How to upgrade Ubuntu Server 9.04 to 9.10 (to 10.04)

Upgrading Ubuntu Server 9.04 to 9.10 was not as easy as I thought. I tried a few different methods that I found on the Internet, but none of them worked. Finally I found a way to solve the issue myself. If you have run into the same trouble, you should read through this article.


Basically, here are the steps that most people would refer to:

$ sudo apt-get update

$ sudo apt-get upgrade

$ sudo apt-get install update-manager-core

$ sudo do-release-upgrade


But I can tell you these steps didn't help in my case. I still got this message from the last step:
Checking for a new ubuntu release
No new release found
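Note: do-release-upgrade also accepts a -d flag that checks for development releases, which may be worth trying at this point:

$ sudo do-release-upgrade -d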


I also tried the following step:

$ sudo apt-get dist-upgrade


Still the same thing; when I typed the command:
$ cat /etc/lsb-release


It still showed me Jaunty 9.04, which is not what I want:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=9.04
DISTRIB_CODENAME=Jaunty
DISTRIB_DESCRIPTION="Ubuntu 9.04"


Finally I got the idea from the Ubuntu Repositories documentation. Basically you need to edit the file /etc/apt/sources.list and change every "jaunty" to "karmic". Then do the following steps again:

$ sudo apt-get update

$ sudo apt-get upgrade

$ sudo apt-get dist-upgrade


After a reboot, you will see that your Ubuntu version has changed to Karmic 9.10.

$ cat /etc/lsb-release


DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=9.10
DISTRIB_CODENAME=Karmic
DISTRIB_DESCRIPTION="Ubuntu 9.10"


I hope this guide helps you upgrade your Ubuntu Server from 9.04 to 9.10 successfully.

Tips:

Using the vi editor to replace every "jaunty" with "karmic" at once:
$ vi /etc/apt/sources.list

Then type the following command at the bottom of the VI editor:
:%s/jaunty/karmic/g

You will see the following message:
18 substitutions on 18 lines

Now save and quit.
:wq!
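Alternatively, sed can make the same substitution non-interactively:

$ sudo sed -i 's/jaunty/karmic/g' /etc/apt/sources.list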


Done!

Update on 2-Apr-2010:

I just checked the Ubuntu web site: the Lucid (10.04) beta release is ready. So I modified /etc/apt/sources.list again and replaced every "karmic" with "lucid". Then do the following steps:

$ sudo apt-get update

$ sudo apt-get upgrade

$ sudo apt-get dist-upgrade


After a reboot, you will see that your Ubuntu version has changed to Lucid 10.04.

$ cat /etc/lsb-release


DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.04
DISTRIB_CODENAME=Lucid
DISTRIB_DESCRIPTION="Ubuntu 10.04"


Friday, February 19, 2010

Cannot create a VM file larger than 256GB in ESX 4.0

When I tried to create a 500GB virtual disk on local storage on an ESX 4.0 server, I got the error message "Cannot create a VM file or virtual disk larger than 256GB" in the vSphere Client.

I googled around, and the only solution I found is to increase the block size of the local storage in ESX 4.0, which normally can only be done during the ESX 4.0 installation.


Increasing block size of local storage in ESX 4.0

Symptoms

* You cannot create a file (virtual disk) larger than 256 GB on local storage.
* You cannot reformat local storage (device or resource busy).

Resolution

ESX 4.0 uses a different installer than previous versions of ESX and creates the local VMFS volume with a default block size of 1 MB. The largest file that can be created with a 1 MB block size is 256 GB.

To create a file bigger than 256 GB, the VMFS filesystem needs to have a block size larger than 1 MB. The maximums are as follows:






Block Size    Maximum File Size
1 MB          256 GB
2 MB          512 GB
4 MB          1 TB
8 MB          2 TB


For more information about block sizes, see Verifying the block size of a VMFS data store (1003565).

The service console in ESX 4.0 runs as a virtual machine on local storage. As such, you cannot reformat this volume.

To resolve this issue, perform one of these workarounds:

  • Re-install the ESX host on a different drive (for example, a second RAID set or boot from SAN), and leave the original disk for the VMFS volume. You can then choose your blocksize when creating the second datastore.

  • Install ESX 3.5, create the volume with desired blocksize, then upgrade to ESX 4.0.

  • Carve out a new LUN or RAID set on the local controller for a new volume. Add physical disks as necessary.


You cannot create a second datastore on the same drive via the ESX GUI. You must use the following command:

Note: You may need to create a partition on the free space first with fdisk.

vmkfstools -C vmfs3 -b Xm -S local2mBS /vmfs/devices/disks/naa.xxxxxxxxxx:y


where:

  • Xm is the blocksize (1m, 2m, 4m, or 8m).

  • local2mBS is your volume name. If the volume name has a space (for example, volume name), enclose it in quotation marks (for example, "volume name").

  • naa is the naa identifier, and y is the partition number. To determine this, run ls -la in the /vmfs/devices/disks folder.


Note: Depending on your disk controller type, naa. may be replaced with eui., t10., or mpx.. For more information, see Identifying disks when working with VMware ESX (1014953).
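Once the new datastore is created, you can confirm its block size with vmkfstools (the volume name below is just an example):

vmkfstools -Ph /vmfs/volumes/local2mBS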

Alternatively, you can reconfigure the installer to install ESX 4.0 with a different blocksize:

  1. Boot the ESX installation DVD.

  2. Press Ctrl+Alt+F2 to switch to the shell.

  3. Run:

    ps | grep Xorg


  4. Kill the PID which shows Xorg -br -logfile ....

    For example, run:

    kill 590


    where 590 is the PID.

    Note: If you specified a GUI mode installation, killing the process identified as Xorg may switch you back to another console. If this occurs, press Ctrl+Alt+F2 to return to the console.

  5. To switch to the configuration directory, run:

    cd /usr/lib/vmware/weasel


  6. To edit the configuration script, run:

    vi fsset.py


    Note: For more information on editing files, see Editing configuration files in VMware ESX (1017022).

  7. Locate class vmfs3FileSystem(FileSystemType):.

  8. Set the blockSizeMB parameter to the block size that you want. It is currently set to 1. The only values that work correctly are 1, 2, 4, and 8.

    Note: Press i for insert mode.

  9. To save and close the file, press Esc, type :wq, then press Enter.

  10. To switch back to the root directory, run:

    cd /


  11. To launch the installer with the new configuration, run:

    /bin/weasel


    And continue the ESX 4.0 installation.

How to rebuild an RPM package

This was the first time I had to patch and rebuild an RPM package; it came up when I tried to add Unicode support on CentOS 5.4 with PHP and PCRE. I received PHP warning messages whenever my regexes used the testing characters ('\X', '\pL', etc.) inside a character class, such as '[\X-]'. After googling, I found that this is because Unicode support is missing from the PCRE build.

Note: This example uses the PCRE source RPM package.



I downloaded PHP 5.3.1 and compiled it manually with the configure option:

--with-pcre-regex=/usr


PHP 5.3.x includes PCRE support built in; however, the yum package for PCRE is not built with Unicode support, so I needed to download the source RPM and patch it according to this page:

http://gaarai.com/2009/01/31/unicode-support-on-centos-52-with-php-and-pcre/

This is needed for Unicode regexp support, so we can do input validation in a variety of character sets.

By default, at least on a Red Hat box, rpm uses /usr/src/redhat as the location of the %_topdir macro, which specifies where most of the work involved in building an RPM takes place.

You can and should change this; it is a good idea to make this a directory that you can write to with a non-privileged account, to avoid compiling and building packages as root.

Why?

A lot of commands get executed when building a package. Sometimes things go wrong. If you're root, important things may be damaged. A big mess may be made. I once (foolishly) rebuilt a proftpd package as root, and the "make install" stage blew up and left newly compiled files all over the place, whereas if I'd been a regular user, I'd have simply gotten a bunch of "permission denied" messages.

  1. Anyway, the macro is easily changed by adding something like the following to ~/.rpmmacros:

    # Path to top of build area
    %_topdir /home/you/src/rpm




  2. If you have never worked on RPMs in this manner, you will need to create a few directories in which to work. I use a sub-directory in my homedir:

    #> mkdir -p ~/src/rpm
    #> cd ~/src/rpm
    #> mkdir BUILD RPMS SOURCES SPECS SRPMS
    #> mkdir RPMS/i[3456]86 RPMS/noarch RPMS/athlon




  3. Download the PCRE Source RPM, and install it:

    #> wget ftp://ftp.pbone.net/mirror/ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/pcre-6.6-2.el5_1.7.src.rpm

    #> rpm -ivh pcre-6.6-2.el5_1.7.src.rpm


    Then install the following packages from the ISO image, if you didn't install them during the initial OS installation:

    #> rpm -ivh beecrypt-4.1.2-10.1.1.x86_64.rpm rpm-libs-4.4.2.3-18.el5.x86_64.rpm rpm-4.4.2.3-18.el5.x86_64.rpm elfutils-0.137-3.el5.x86_64.rpm elfutils-libs-0.137-3.el5.x86_64.rpm rpm-build-4.4.2.3-18.el5.rf.x86_64.rpm




  4. Patch the PCRE spec file:

    Open the ~/src/rpm/SPECS/pcre.spec file and find the following line:

    %configure --enable-utf8


    Change it to include the Unicode properties option:

    %configure --enable-utf8 --enable-unicode-properties


    Then save and close the file.



  5. Rebuild the PCRE RPM package and reinstall it:

    #> rpmbuild -ba ~/src/rpm/SPECS/pcre.spec
    #> rpm -Uvh RPMS/x86_64/pcre-6.6-2.7.x86_64.rpm RPMS/x86_64/pcre-devel-6.6-2.7.x86_64.rpm --force





  6. Then run the pcretest program; you should see "Unicode properties support" in the result.

    $ pcretest -C
    PCRE version 6.6 06-Feb-2006
    Compiled with
    UTF-8 support
    Unicode properties support
    Newline character is LF
    Internal link size = 2
    POSIX malloc threshold = 10
    Default match limit = 10000000
    Default recursion depth limit = 10000000
    Match recursion uses stack




I also found a very helpful guide that details this process nicely: How to patch and rebuild an RPM package.

Export a File System for remote NFS client


This article presents the methods for preparing a set of directories that can be exported to remote NFS clients under Linux.

Under Linux this can be accomplished by editing the /etc/exports file.


About the /etc/exports File

The /etc/exports file contains an entry for each directory that can be exported to remote NFS clients. It is read automatically by the exportfs command. If you change this file, you must run exportfs again before the changes will affect the way the daemons operate.

Only when this file is present during system startup does the rc.nfs script execute the exportfs command and start the NFS (nfsd) and mount (mountd) daemons.

Edit the exports file and add the following lines:

/dir/to/export host1.yourdomain.com(ro,root_squash)
/dir/to/export host2.yourdomain.com(rw,no_root_squash)


Where:

* /dir/to/export is the directory you want to export.
* host#.yourdomain.com is the machine allowed to mount this directory.
* The ro option means the directory is mounted read-only; the rw option means read-write.
* The root_squash option denies root write access to this directory; the no_root_squash option allows it.

For this change to take effect, you will need to run the following command on your terminal:

# /usr/sbin/exportfs -avr
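You can verify what the server is now exporting with showmount:

# showmount -e localhost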


The next step is to configure automatic mounting of this NFS export:

On your remote Linux machine, edit /etc/fstab and add the following line:

NFS_host_name:/dir/to/export /local/mapping/dir nfs hard,intr 0 0


Save, then run the following command to mount all NFS entries in /etc/fstab with their corresponding options:

mount -t nfs -a


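You can also mount a single export by hand to test it before relying on the /etc/fstab entry:

# mount -t nfs NFS_host_name:/dir/to/export /local/mapping/dir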

If you want to squeeze out the last bit of performance, you can experiment with the rsize and wsize parameters:

NFS_Server_name:/dir/to/export /local/mapping/dir/ nfs rsize=8192,wsize=8192,timeo=20,retrans=6,async,rw,noatime,intr 0 0


Troubleshooting:

  1. As with most things in Linux, watch the log files. If you get an error on the client when trying to mount a share, look at /var/log/messages on the server. If you get an error like "RPC: program not registered", it means the portmap service isn't running on one of the machines. Verify all the processes are running and try again.



  2. The second problem has to do with username mappings, and is different depending on whether you are trying to do this as root or as a non-root user.

    If you are not root, then usernames may not be in sync on the client and the server. Type id [user] on both the client and the server and make sure they give the same UID number. If they don't then you are having problems with NIS, NIS+, rsync, or whatever system you use to sync usernames. Check group names to make sure that they match as well. Also, make sure you are not exporting with the all_squash option. If the user names match then the user has a more general permissions problem unrelated to NFS.



If you cannot find the solution to your problem, refer to the following URL:

http://www.higs.net/85256C89006A03D2/web/PageLinuxNFSTroubleshooting

Note:

/etc/exports is VERY sensitive to whitespace - so the following statements are not the same:

/export/dir hostname(rw,no_root_squash)
/export/dir hostname (rw,no_root_squash)


The first will grant hostname read-write access to /export/dir without squashing root privileges. The second will grant hostname read-write access with root squash, and it will also grant EVERYONE else read-write access, without squashing root privileges. Nice, huh?

Tuesday, October 27, 2009

How to install Nagios 3 on Ubuntu 9.04

This guide only covers how to install Nagios 3 on Ubuntu 9.04. Since Nagios 3 is a standard Ubuntu package, you can install it quickly and easily.

Assuming you have already installed the Ubuntu 9.04 OS, we can start installing the Nagios 3 software:


  • Install Nagios version 3

    # sudo apt-get install nagios3



  • After the installation finishes, create the web user password file

    # sudo htpasswd -c /etc/nagios3/htpasswd.users nagiosadmin
    New password: xxxxxxxx
    Re-type new password: xxxxxxxx



  • Then you should already have a working Nagios!

    Open a browser, and go to http://localhost/nagios3/

    At the login prompt, login as:

    User: nagiosadmin
    Password: xxxxxxxx

    Note: There is a known problem with gvfs; you will see the following error message:

    Nagios sends the error message "DISK CRITICAL - /home/usr/.gvfs is not accessible: Permission denied"


    This problem is due to the very restrictive permissions set by FUSE on the .gvfs directory. The workaround is as follows:

    >> Modify /etc/nagios3/conf.d/localhost_nagios2.cfg as follows:

    define command{
    command_name check_all_disks_plus
    command_line /usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -u GB -A -i .gvfs
    }

    define service{
    use generic-service
    host_name localhost
    service_description Disk Space
    check_command check_all_disks_plus!20%!10%
    }

    Then run

    # sudo /etc/init.d/nagios3 restart



  • If you would like to add a new host for monitoring, you should create a configuration file for it.

    #cd /etc/nagios3/conf.d
    #vi newhosts.cfg

    define host{
    use generic-host
    host_name server1
    alias server one at lab
    address -------- [server1's IP address here]
    }

  • Let's create a new hostgroup for the occasion and add the new host to it.

    Edit the file hostgroups_nagios2.cfg and add a new group:

    vi hostgroups_nagios2.cfg

    define hostgroup{
    hostgroup_name lab-servers
    alias Lab Servers
    members server1
    }

  • Now let's associate some services to that host

    # vi services_nagios2.cfg


    - find the section called "check that ping-only hosts are up", and change the line:

    hostgroup_name ping-servers


    to

    hostgroup_name ping-servers,lab-servers


  • Verify that your configuration file is ok:

    # nagios3 -v /etc/nagios3/nagios.cfg


    ... You should get:

    Total warnings: 0
    Total errors: 0


    Things look okay - No serious problems are detected during the check.

  • Reload/Restart Nagios Services

    # /etc/init.d/nagios3 restart


  • Go to the web interface (http://localhost/nagios3) and check the host that you just added.

    Once the new host can be monitored from the above URL, you are ready to add all the servers/PCs/routers/network equipment that you would like to monitor.


NOTE:

- This requires a bit of planning, but you now have all the elements for doing it.
- Think about the logical structure of the files; it should be possible without too much work!
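
Tip: you can also run any Nagios plugin by hand to test a check before wiring it into the configuration. For example, a manual ping check against the new host (the thresholds here are arbitrary):

$ /usr/lib/nagios/plugins/check_ping -H server1 -w 100,20% -c 200,40%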

Friday, August 21, 2009

Howto Fix Internal Server Error

Recently I encountered a problem: when running a Perl CGI (or Python) script on a Fedora 10 machine, I saw the "Internal Server Error" message in my browser. The message said something like "please check the server's error-log for more information" and to contact the web server administrator.

I checked the Apache error log; since I am running Fedora, the error log file is located at /var/log/httpd/error_log. The error messages say:


[client 127.0.0.1] (2) No such file or directory: exec of '/var/www/cgi-bin/cgiscriptname.cgi' failed, referer: http://localhost/your.html

[client 127.0.0.1] Premature end of script headers: cgiscriptname.cgi, referer: http://localhost/your.html


I googled for solutions, and most of them said the permissions were not correct. But after I changed the permissions on all the files and directories, it still didn't work, and I got the same error messages.

Finally I found the solution by checking the web server's configuration file, httpd.conf, which is located in /etc/httpd/conf/. By default in Fedora, the cgi-bin section is configured as follows:


<Directory "/var/www/cgi-bin">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>


I noticed that Options None was the cause of the problem. I changed it to the following:

Options ExecCGI
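
so that the whole cgi-bin section now reads:

<Directory "/var/www/cgi-bin">
AllowOverride None
Options ExecCGI
Order allow,deny
Allow from all
</Directory>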


After restarting the Apache web service, the "Internal Server Error" message was gone. The scripts are running fine now.

By the way, the following tips might be useful in your case.

  1. The Perl or Python path in your script should match your server environment settings.

    For Perl, the header of your script should look like:

    #!/usr/bin/perl


    For Python, the header of your script should look like:

    #!/usr/bin/python


  2. The second thing that you need to check is the permission of your CGI scripts.

    • Your home directory should have permissions of 701

    • Your .www directory (and any sub-directories containing your scripts) should have permissions of 701

    • Your CGI/Python scripts should have permissions of 701

    • Files that your CGI/Python script needs to read (for example an image file) should have permission of 604


  3. The next thing to check is the owner and group of your CGI/Python script. Usually the script and its enclosing directory must have the same owner/group as your web server. Check /etc/httpd/conf/httpd.conf for the web server user; in Fedora, both user and group are apache by default.

  4. The last thing you might need to check is the scripts themselves: some may have been transferred from Windows via FTP, and they should be transferred in ASCII mode. If they weren't, you need to convert them to Unix line endings by typing the following command:

    # tr -d '\r' < yourscript.py > yourscriptconv.py


    Then run the converted script.
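
    If the dos2unix utility is installed, it can do the same conversion in place:

    # dos2unix yourscript.py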


I hope this helps you solve your Internal Server Error problem.
