Unix inode question --SUMMARY (4 answers)-- THANKS ALL (fwd)
mike at boobaz.net
Tue Feb 27 23:50:01 PST 2001
Excellent responses below to a question about depleting i-nodes on a Unix filesystem.
-=<(| mike at boobaz.net |)>=-
---------- Forwarded message ----------
Date: Tue, 27 Feb 2001 12:19:22 -0800
Subject: Re: Unix inode question --SUMMARY (4 answers)-- THANKS ALL
---------Thanks Dave D:
Most Unix file systems have fixed length i-node tables. You can tell
how many are left with "df -i". The risk of a corrupt block in the
middle of an 8000-entry directory is no higher than corruption of any
other block, so backups of the files are the best way to recover if you
do have a problem. The more likely problem is high latency when processing
file changes in a directory that large, which is probably using double
(and perhaps triple) indirect blocks to store the directory itself. See
the article by Wietse Venema on i-nodes and file system allocation on
my home page.
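For the programmatically inclined, the numbers "df -i" reports can also be
read from inside a program with statvfs(3); a minimal sketch (the "/var"
mount point below is just an example path):

    /* Sketch: report total and free inodes for one filesystem,
     * the same figures "df -i" prints. "/var" is an example path. */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(void)
    {
        struct statvfs vfs;

        if (statvfs("/var", &vfs) != 0) {
            perror("statvfs");
            return 1;
        }
        /* f_files is the total inode count, f_ffree the free count. */
        printf("inodes: %lu total, %lu free\n",
               (unsigned long)vfs.f_files,
               (unsigned long)vfs.f_ffree);
        return 0;
    }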
---------Thanks (and Hi!) Cere:
In answer to your inode question: the inode limits are specific to the
filesystem type, plus how you set up your filesystem. Do a "man mkfs" or
"man mke2fs" if on a recently made Linux box. In any case, the inode limit
comes about on a filesystem basis rather than at the directory level. I'm
sure there is some structure size limit for the directory, but I suspect
that it is at least as large as, if not larger than, the filesystem inode
limit. Sorry I can't be more specifically helpful, but you didn't give any
specific OS/filesystem info...
I don't know about a crash, but you're risking poor performance.
The unix filesystem is notoriously bad as a database package. All
searches of the directory are done sequentially. If the directory
you're talking about gets updated frequently you'll also suffer
from locking that needs to be done to create a new entry -- all
searches of the directory are suspended while the updating process
scans through the directory looking for an empty slot and rewrites
that disk block. Every getwd() (get working directory) call will
have to do a stat() call on 50% (on average) of the directories
in each parent directory as it tries to figure out where it is.
Doing 4000 stat()s must take a while (actually, you're probably
spending most of your time in the newer directories, so you'll
be doing 8000 stat()s more often than not). Every time you start
a new [sub]shell it does a getwd(), as do many other commands.
Of course if you're not cd'd into a subdirectory of the monster
directory then you're searching it on every file open so you
lose either way.
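For the curious, here is a rough sketch of what the classic getwd()
algorithm does for a single level (an illustration of the technique, not
any particular libc's implementation); the real call repeats this walk
all the way up to the root:

    /* Sketch: find the name of the current directory by scanning
     * its parent, one stat() per entry, until the inode matches.
     * getwd() repeats this for every level up to the root. */
    #include <stdio.h>
    #include <dirent.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat here, parent, entry;
        struct dirent *de;
        DIR *dp;
        char path[1024];

        stat(".", &here);       /* inode of the current directory */
        stat("..", &parent);

        /* At the root, "." and ".." are the same inode. */
        if (here.st_ino == parent.st_ino && here.st_dev == parent.st_dev) {
            printf("at the root\n");
            return 0;
        }

        if ((dp = opendir("..")) == NULL)
            return 1;
        while ((de = readdir(dp)) != NULL) {
            snprintf(path, sizeof(path), "../%s", de->d_name);
            if (stat(path, &entry) != 0)    /* one stat() per entry */
                continue;
            if (entry.st_ino == here.st_ino && entry.st_dev == here.st_dev) {
                printf("this directory is \"%s\"\n", de->d_name);
                break;
            }
        }
        closedir(dp);
        return 0;
    }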
You'll probably get an ENOSPC (no space left on device) type error
when you try to create a new entry. Your df command should tell
you how close you are to running out of inodes. On Linux you
can use "df -i".
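From a program's point of view it looks something like this sketch (the
filename is just an example); note that running out of inodes and running
out of blocks both surface as the same errno:

    /* Sketch: create a file and report ENOSPC, which covers both
     * "out of blocks" and "out of inodes". */
    #include <stdio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("newfile", O_WRONLY | O_CREAT | O_EXCL, 0644);

        if (fd < 0) {
            if (errno == ENOSPC)
                fprintf(stderr, "no free blocks or inodes left\n");
            else
                perror("open");
            return 1;
        }
        close(fd);
        return 0;
    }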
---------And finally, Thanks Steven:
Most typically, an overly large directory will cause performance
degradation on accesses and updates to the filesystem. Modifying or
searching a large directory takes more I/O, and searching it takes
more CPU. In my experience, a more heavily utilized disk is more
likely to fail than a less utilized disk, but that's not so significant
when added to failures caused by excessive heat (dead fan) or power supplies.
If you are actively modifying a large directory, you increase your
exposure to filesystem damage when there is a system interruption (crash).
Also, if you have many processes actively attempting to update a single
directory, you can run into significant performance degradation due
to filesystem locking.
Typically, a Unix filesystem will not allocate additional inode space as
needed. Your filesystem's capabilities will vary depending on the specific
Unix filesystem (e.g. bsd, jfs, ext2fs, advfs) you have. The "df -i" command
will show the available free blocks and free inodes. Typically, a program's
failure to create a file because it is out of inodes is handled about as
well as a failure due to lack of filesystem space (out of blocks, out of
disk quota). The typical Unix disk quota implementation sets limits both
on blocks and inodes.