I have to handle some machines at commercial sites which have different standards for their installations. The main difference is that they create a lot of file systems instead of the few familiar from most distributions (/, /usr, /var, /tmp, /home).
Not only do they create a FS for each product installed on the machine (a habit carried over from old UNIX, when installation meant nothing more than copying files), they also separate the variable files (e.g. logs or other very active files) of each product.
I'm trying to move these standards closer to the FHS (and use the advantages of RPM/DEB), but one of the main questions I get is what will happen when a FS reaches 100%. When everything is separated, one product can't affect another, but that comes with a lot of sysadmin overhead. Leaving everything variable in /var makes things easy, but holds some risk.
I’d be happy to hear what other sysadmins chose to do…
Use LVM…
For security reasons, user-writable partitions should be mounted with the nosuid flag, so this already implies separate partitions for /home, /var and /tmp.
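A sketch of what that might look like in /etc/fstab (the devices and options here are illustrative, not a recommendation for any particular site):

```
# /etc/fstab fragment: user-writable file systems mounted nosuid
# (device names are made up)
/dev/vg0/home  /home  ext3  defaults,nosuid        0  2
/dev/vg0/tmp   /tmp   ext3  defaults,nosuid,nodev  0  2
/dev/vg0/var   /var   ext3  defaults,nosuid        0  2
```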
1. All major Unixes have had packaging systems roughly equivalent to RPM/deb for the last 15 years. The problem you describe is the result of 3rd-party vendors who are reluctant to use these packaging systems. This is very common with big vendors who view their product as the sole owner of the host (e.g. Oracle).
2. You can see this disease even in companies that are Unix vendors themselves and should know better (just have a look at the Java RPM Sun used to package; the amount of code run in post-install alone would make you sick).
3. IMO, the only disk-space issue an installation mechanism should account for is the space for the installed program itself (which every installer handles). The extra data, logs, etc. are not and should not be managed by software installation procedures.
4. One reason for (3) is that disk allocation can always be changed afterwards, orthogonally to the software installation. Some examples: extending a logical volume, or relocating the files to a different partition and then remounting it under the original directory.
5. Another reason is that the size of the files in /var (as the name suggests) depends more on the usage scenario than on the software being installed. E.g. the sendmail software takes the same space on my desktop as on a 500-employee mail server; the size of /var/mail, however, would be totally different.
6. Even the tradeoff you presented (separation vs. integration) is a local policy that may change from site to site. Such *storage management* policies should be handled by separate tools (IMO).
my 2 agorot (cents are worth too little these days 😉)
From the description it seems as if they are only monitoring the services, if anything. I'd suggest monitoring disk usage as well, and putting mechanisms in place that keep /var from growing uncontrolled. Take log rotation as an example: given a certain usage pattern, with log rotation the logs grow to a certain size and then stay more or less fixed.
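For instance, a logrotate snippet along these lines (the path and numbers are illustrative) bounds log usage at roughly five weeks of compressed history instead of letting it grow forever:

```
# /etc/logrotate.d/myapp -- illustrative example
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```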
Quotas.
The missing feature is per-directory quota. I've been looking for this feature for a long time, and the only thing close was Sun's QFS/SAMFS (maybe ZFS can do that too; it didn't exist back then).
Several file systems per project is overkill, just like using VMware VMs for separation instead of OpenVZ or chroot.
I wish someone would add per-directory quota support to ext3. It could be VERY useful.
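Lacking file system support, a poor man's per-directory quota can be approximated with a cron-driven check built on du. A minimal sketch (the function name and limits are made up):

```shell
# check_dir_quota DIR LIMIT_KB: print a warning when DIR's disk usage
# exceeds LIMIT_KB kilobytes. Meant to be run from cron, mailing the output.
check_dir_quota() {
    dir=$1
    limit_kb=$2
    used_kb=$(du -sk "$dir" | cut -f1)   # total usage in KB
    if [ "$used_kb" -gt "$limit_kb" ]; then
        echo "WARN: $dir uses ${used_kb}KB (limit ${limit_kb}KB)"
    fi
}
```

It only warns after the fact rather than enforcing a hard limit, but that is often enough to act before a FS hits 100%.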
Well, looks like you are looking at a couple of different problems.
How to install 3rd party software? The way I do this at our company (or at least the way we are beginning to do it now) is to create packages for the software. Since a lot of commercial applications are binary-only, this is usually a matter of just having the package copy the files to the right place. Then removing the software is easy, just uninstall the package.
Creating packages, be it .deb, .rpm, or Gentoo ebuilds is not as complicated as it may seem. You can figure it out in a day or so.
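As a sketch, a minimal .spec file for a binary-only vendor application could look something like this (the package name and paths are invented for illustration):

```
# acme-app.spec -- wrap pre-built vendor binaries in an RPM
Name:      acme-app
Version:   1.0
Release:   1
Summary:   Repackaged binary-only vendor application
License:   Proprietary

%description
Vendor binaries wrapped in an RPM so installs and removals are tracked.

%install
mkdir -p %{buildroot}/opt/acme-app
cp -a %{_sourcedir}/acme-app/* %{buildroot}/opt/acme-app/

%files
/opt/acme-app
```

With the files owned by a package, `rpm -e acme-app` removes them cleanly instead of leaving a hand-copied tree behind.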
As for running out of space if you have a single partition on your machine, I recommend just using a volume manager like LVM2. If you begin to run out of space on a volume, add some disks and grow it, then grow the file system. File systems like XFS can even be grown online, so with hot-swap disks it's possible to increase the size without any downtime.
In LVM2 you can even install a bigger disk, and then migrate the data off the smaller disk and replace that one as well, removing the old disks from the system entirely.
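A hedged sketch of that LVM2 workflow (volume group, device names and sizes are illustrative; all of this requires root):

```shell
# Grow a volume with a new disk, then retire the old, smaller one.
pvcreate /dev/sdc1               # prepare the new, larger disk for LVM
vgextend vg0 /dev/sdc1           # add it to the volume group
lvextend -L +10G /dev/vg0/data   # grow the logical volume
xfs_growfs /srv/data             # XFS grows online, while mounted
pvmove /dev/sdb1                 # migrate extents off the old disk
vgreduce vg0 /dev/sdb1           # drop the old disk from the volume group
```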
I can’t see any Linux installation in a server world without using LVM… and of course I use it too.
I would use /srv or /home for services' data and for users' data. The rest of the file systems (except /var) should be static. Having a per-directory quota would be great, but it doesn't exist for ext3. So the only method right now is to separate the file systems per application and cope with the management overhead.
I already use RPMs to wrap commercial installation programs, but not all are as easy as you describe (install once, copy to other machines).
Well, if you can afford having a separate user for each product, you could use regular quotas.
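For example, assuming /var is mounted with the usrquota option and each product runs as its own user, classic disk quotas do the job (the user name and limits here are illustrative):

```shell
# Set soft/hard block limits (in KB) for the product's user; 0 0 = no inode limits.
setquota -u acmeapp 5000000 6000000 0 0 /var
quota -u acmeapp    # report the user's current usage against its limits
```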
Pingback: Web 0.2 » Blog Archive » Per directory quota: not a dream
Well, the trackback didn't arrive yet, so check out http://www.held.org.il/blog/?p=80