* possibly add -n to dump the line number
* maybe add an optimization using seek() on the file
* maybe add smarter memory management, allocating the buffers only once
* work out whether the arithmetic holds for large values
* automatically (via configure) select the best way to seed the random
  number generator
* add error checking of streams using ferror()
* maybe implement linked-list construction for arrays that might grow
  fast and would otherwise need a lot of realloc()s
* implement sanity checks for the test program
* improve docs
* implement internationalisation with GNU gettext
* do something with the NEWS file?
* maybe move the source files to a src directory
* set up a CVS server
* add an option for another field delimiter (other than newline)
* try using temporary files with pointers into the files
* add options:
    --dbm             use Berkeley DB for operation instead of memory.
                      This is useful when FILE is too large to fit into
                      memory.  Little CPU/memory usage, but much slower.
    -s, --skip-blank  skip blank lines
* options:
    --pick[=count]    pick count lines at random (lines can be picked
                      more than once)
    --count[=count]   select count lines (lines will not be chosen more
                      than once)