Suppose you're running low on disk space. You need to free some up, by finding
something that's a waste of space and deleting it (or moving it to an archive
medium). How do you find the right stuff to delete, that saves you the maximum
space at the cost of minimum inconvenience?

Unix provides the standard du utility, which scans your disk and tells you which
directories contain the largest amounts of data. That can help you narrow your
search to the things most worth deleting.

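For illustration only, the heart of what du computes can be sketched in a few
lines of Python. This is a toy approximation, not du's actual implementation:
it sums apparent file sizes rather than allocated disk blocks, and it ignores
hard links.

    import os

    def du(root):
        """Map each directory under root to the total size in bytes of
        everything beneath it (a rough stand-in for du output)."""
        totals = {}
        for dirpath, dirnames, filenames in os.walk(root, topdown=False):
            size = sum(os.lstat(os.path.join(dirpath, f)).st_size
                       for f in filenames)
            # Bottom-up walk, so subdirectory totals already exist;
            # symlinked directories are not descended into, hence .get().
            size += sum(totals.get(os.path.join(dirpath, d), 0)
                        for d in dirnames)
            totals[dirpath] = size
        return totals

Sorting those totals by size is the same hunt for the biggest directories that
the usual du | sort -n pipeline gives you.
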
However, that only tells you what's big. What you really want to know is what's
too big. By itself, du won't let you distinguish between data that's big because
you're doing something that needs it to be big, and data that's big because you
unpacked it once and forgot about it.

Most Unix file systems, in their default mode, helpfully record when a file was
last accessed. Not just when it was written or modified, but when it was even
read. So if you generated a large amount of data years ago, forgot to clean it
up, and have never used it since, then it ought in principle to be possible to
use those last-access time stamps to tell the difference between that and a
large amount of data you're still using regularly.

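Those timestamps are exposed through the standard stat interface. As a quick
sketch in Python (the path below is a made-up example, and note that a file
system mounted with options such as noatime or relatime may not keep access
times fully current):

    import os, time

    st = os.stat("/var/tmp/old-build.tar")       # hypothetical example path
    print("size    :", st.st_size, "bytes")
    print("modified:", time.ctime(st.st_mtime))  # last written
    print("accessed:", time.ctime(st.st_atime))  # last read

    # A big file whose last access is years in the past is a good
    # candidate for archiving or deletion.
    print("idle for %.0f days" % ((time.time() - st.st_atime) / 86400))
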
agedu does the same disk scan as du, but also records the last-access times of
everything. Then it builds an index that lets it efficiently generate reports
giving a summary of the results for each subdirectory.
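
The index and report formats are agedu's own; purely to illustrate the
principle, here is a naive Python version of such a report, with made-up
parameters. It has no index, so it rescans the disk on every query, which is
exactly the repeated cost agedu's index is there to avoid.

    import os, time

    def stale_usage(root, days=365):
        """Per-directory total of bytes not accessed in the given period."""
        root = os.path.abspath(root)
        cutoff = time.time() - days * 86400
        totals = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.lstat(os.path.join(dirpath, name))
                except OSError:
                    continue            # vanished or unreadable; skip it
                if st.st_atime < cutoff:
                    # Charge the stale bytes to this directory and to
                    # every ancestor up to root.
                    d = dirpath
                    while True:
                        totals[d] = totals.get(d, 0) + st.st_size
                        if d == root:
                            break
                        d = os.path.dirname(d)
        return totals

    # Ten biggest accumulations of year-old data under /home (example path).
    for d, size in sorted(stale_usage("/home").items(),
                          key=lambda kv: -kv[1])[:10]:
        print("%12d  %s" % (size, d))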