Dr. Mark Humphrys

School of Computing. Dublin City University.



Contiguous file allocation

Each file is stored in one unbroken sequence of locations on the physical disk.

This has the same problems as contiguous memory allocation.
What happens if the file grows? It has to be rewritten into a larger slot. This takes ages.

Non-contiguous allocation

As with paging in memory, the disk is divided into blocks.

The file is held in a collection of blocks scattered over the disk.
If the file needs more blocks, it can take them from anywhere on the disk where blocks are free.

Index of where the blocks of the file are

Like pages in memory, blocks can "flow" like liquid into free slots around the disk. They don't all need to be in the same place.
We need some index of where the pieces of the file are.
Various indexing systems exist, using linked lists or tables.
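One such scheme is a linked list held in a table, as in MS-DOS's File Allocation Table (FAT): one table entry per disk block, where each entry gives the number of the file's next block. A minimal sketch in Python (the block numbers are made up for illustration):

```python
# FAT-style index sketch: the table has one entry per disk block;
# each entry holds the number of the file's next block, or END.
END = -1
fat = {7: 12, 3: END, 12: 3}   # a file starting at block 7

def blocks_of(start, table):
    """Follow the chain from the starting block to the end."""
    chain = []
    b = start
    while b != END:
        chain.append(b)
        b = table[b]
    return chain

print(blocks_of(7, fat))   # [7, 12, 3]
```

Note the file's blocks (7, then 12, then 3) need not be adjacent or even in order on the disk; only the table knows the sequence.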


Demonstration of fragmentation (files split into multiple parts).

Shell script to see blocks allocated to files

# compare actual file size with blocks used 

for i in *
do
  ls -ld "$i"
  du -h "$i"
done

Results like:

-rwxr-xr-x 1 me mygroup 857 Jun 27  2000 synch
1.0K    synch

-rwxr--r-- 1 me mygroup 1202 Oct 25  2013 profile
2.0K    profile

-rwxr-xr-x 1 me mygroup 1636 Oct 28  2009 yo.java
2.0K    yo.java

-rwxr--r-- 1 me mygroup 2089 Oct  8 00:03 flashsince
3.0K    flashsince

-rwxr-xr-x 1 me mygroup 9308 Oct 19  2010 yo.safe
10K     yo.safe
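The same comparison can be made from inside a program. A sketch using Python's os.stat, where st_size is the logical file size and st_blocks counts allocated space in 512-byte units (on Linux):

```python
import os
import tempfile

# Write a file of 857 bytes (like "synch" above) and compare
# its logical size with the space allocated to it on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 857)
    path = f.name

st = os.stat(path)
print(st.st_size)             # logical size: 857 bytes
print(st.st_blocks * 512)     # allocated space: rounded up to whole blocks
os.unlink(path)
```

The allocated figure depends on the file system's block size, which is why du reports 1.0K or more for an 857-byte file.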


An extreme experiment to demonstrate wasted space ("slack space") in file systems.
This person makes 100,000 files of 5 bytes each.
This is only 500 k of actual data.
But it needs 400 M of disk space to store it.
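The arithmetic behind that result, assuming each file occupies one 4 K allocation unit (the exact block size in the experiment is an assumption):

```python
# Numbers from the experiment above; the 4 KB block size is an assumption.
n_files = 100_000
data_per_file = 5            # bytes of real data per file
block_size = 4096            # assumed allocation unit per file

actual_data = n_files * data_per_file   # 500,000 bytes: about 500 K
allocated = n_files * block_size        # 409,600,000 bytes: about 400 M
print(allocated // actual_data)         # each byte of data costs ~819 bytes of disk
```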

Contiguous file allocation is good where possible

Unlike in memory, where contiguous allocation is dead, in files it has made a bit of a comeback.

The reason is that disk space is plentiful and cheap, but the speed of reading/writing disk has not increased as much - indeed it has got far worse relative to CPU speed.

To write a contiguous file to disk you don't have to jump the disk head around the disk. You just keep writing after the last location you wrote - the head is already in the correct position.

So modern systems try to use contiguous allocation for small files, and only use non-contiguous allocation for large files. To achieve this, they may allocate more space than is immediately needed. Some wasted disk space is worth it if it makes disk I/O a lot faster.
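A program can cooperate with this by reserving space up front, so the file system has a chance to lay the file out in one piece. A sketch using os.posix_fallocate (Linux-specific):

```python
import os
import tempfile

# Reserve 1 MB of disk space for a file before writing any data,
# so the file system can try to allocate it contiguously.
fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, 1024 * 1024)   # reserve 1 MB now
    size = os.fstat(fd).st_size
    print(size)   # 1048576: the file's size grew to the reserved length
finally:
    os.close(fd)
    os.unlink(path)
```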


Demonstration of fragmentation (files split into multiple parts) and defragmentation (reducing such splitting and making files contiguous).

Cache blocks in RAM for speed

Also to speed things up, the OS caches recently-accessed blocks in memory in case they are needed again, avoiding another disk I/O.
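A common eviction policy for such a cache is least-recently-used (LRU): when the cache is full, throw out the block that has gone unused the longest. A sketch in Python, with read_block_from_disk standing in for real disk I/O:

```python
from collections import OrderedDict

# Sketch of a block cache with LRU eviction.
CACHE_SIZE = 3
cache = OrderedDict()          # keeps blocks in recency order

def read_block_from_disk(n):
    return f"<data of block {n}>"   # placeholder for real disk I/O

def read_block(n):
    if n in cache:                   # cache hit: no disk I/O needed
        cache.move_to_end(n)         # mark as most recently used
        return cache[n]
    data = read_block_from_disk(n)   # cache miss: go to disk
    cache[n] = data
    if len(cache) > CACHE_SIZE:
        cache.popitem(last=False)    # evict least-recently-used block
    return data

read_block(1); read_block(2); read_block(3)
read_block(1)          # hit: block 1 becomes most recent
read_block(4)          # evicts block 2, the least recently used
print(list(cache))     # [3, 1, 4]
```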

RAM drive

An extreme version of this idea: hold an entire file system in RAM. All "disk" I/O then happens at memory speed, but the contents are lost when the machine powers off.
