
So let's get this straight: we want to use wc -l to count how many lines there are in a file, and we're using this to benchmark the Linux VM?

The only thing I will say is: stop beating up your hard drives by asking them to read the whole file and count how many lines are in it. Use what the filesystem already provides.

stat -c%s filename

Benchmarking with wc -l is filled with problems, and this article unfortunately has more flaws than this one, but I'll stop now.



The article wasn't about how to get the size of a file. It was about how Linux caches large files. The article just used wc -l as a simple way of loading the entire file into memory.
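A rough sketch of the kind of cache-warming run being described (the file name and size here are arbitrary, and the timings will depend entirely on your hardware and whether the file was already cached):

```shell
# Create a throwaway ~100 MB test file
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=100 status=none

# First read: the data may have to come from disk (cold cache)
time wc -l "$f"

# Second read: the data is served from the page cache, so it's typically much faster
time wc -l "$f"

rm -f "$f"
```

The point is that wc -l is just a convenient way to force a full sequential read; any command that reads every byte would warm the cache the same way.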


Ignoring that, stat -c%s gives the size of the file in bytes, not a count of newlines.
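To make the distinction concrete, here is a quick sketch showing that the two commands answer different questions (the sample text is arbitrary):

```shell
f=$(mktemp)
printf 'one\ntwo\nthree\n' > "$f"

stat -c%s "$f"   # prints 14: the file size in bytes (stored in the inode)
wc -l < "$f"     # prints 3: the number of newlines (requires reading the file)

rm -f "$f"
```

stat reads only the inode metadata, which is why it's cheap, but the inode simply does not record a line count.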

It's always funny to see how the wrong are often so cocksure.


Actually, I'm a little baffled how someone who knows about stat has such an odd idea about how files work.

How could you possibly know how many newline characters are in a file without looking in the file?


Unless you disable it, ext filesystems normally store this, and it can be pulled out of 'stat' easily. Sure, you can read the whole file and count every line, or you can trust what the filesystem's metadata says.


What version of coreutils are we talking about here? What filesystem mount options do you have?


I would think cat would do the same, no? Or reading the file in $lang?
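Right, for the purpose of pulling a file through the page cache, any full sequential read should be equivalent. A sketch of a few interchangeable ways to do it (file name and size are arbitrary):

```shell
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1M count=10 status=none

# Each of these reads every byte and so populates the page cache:
wc -l "$f" > /dev/null
cat "$f" > /dev/null
dd if="$f" of=/dev/null bs=1M status=none

rm -f "$f"
```

wc -l is just a convenient choice because it produces a single small number instead of spewing the file's contents to the terminal.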



