generic_file_llseek() broken?

Andreas Dilger (Wed, 14 Nov 2001 16:51:47 -0700)

I was recently testing a bit with creating very large files on ext2/ext3
(just to see if limits were what they should be). Now, I know that ext2/3
allows files just shy of 2TB right now, because of an issue with i_blocks
being in units of 512-byte sectors, instead of fs blocks.

I tried to create a (sparse!) file of 2TB size with:

dd if=/dev/zero of=/tmp/tt bs=1k count=1 seek=2047M

and it worked fine (finished immediately, don't try this with reiserfs...).

When I tried to make it just a bit bigger, with:

dd if=/dev/zero of=/tmp/tt bs=1k count=1 seek=2048M

dd fails the "llseek(fd, 2T, SEEK_SET)" with -EINVAL, and then proceeds
to loop "infinitely", reading through the file to manually advance the
file descriptor offset to the desired position. That is bad.

I _think_ there is a bug in generic_file_llseek(): it returns -EINVAL
instead of -EFBIG in the case where the offset is larger than s_maxbytes.
AFAICS, the -EINVAL return is meant for the case where "whence" is invalid,
not for the case where "offset" is too large for the underlying filesystem
(although I can see -EINVAL for seeking to a negative position).

If I use:

dd if=/dev/zero of=/tmp/tt bs=1k count=1025 seek=2097151k

I correctly get "EFBIG (file too large)" and "SIGXFSZ" from write(2).

Does anyone know the correct LFS interpretation of this? From what I can
see (I have not read the whole thing), lseek() should return EOVERFLOW if
the resulting offset is too large to fit in the passed type. It doesn't
really say what should happen in this particular case - can someone try it
on a non-Linux system and see what the result is?

Either way, I think the kernel is broken in this regard.

Cheers, Andreas

Andreas Dilger
