before next release
-------------------
* go over all FIXMEs in code
* parse CSS (see Debian bugs?)
* make sure all characters in URLs are properly URL-encoded (and report unencoded characters as an error on the checked page); see the sketch after this list
* close streams properly when not downloading files
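
A minimal sketch of what the URL-encoding check could look like, using only
the standard urllib.parse module (Python 3 naming); the set of safe
characters, the function names and the way a problem would be reported are
assumptions for illustration, not existing webcheck code:

    import urllib.parse

    # characters that may appear literally in a URL without percent-encoding
    # (roughly the RFC 3986 unreserved and reserved sets, plus % itself so
    # existing escapes are left alone); this exact set is an assumption
    _SAFE = frozenset(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
        "-._~:/?#[]@!$&'()*+,;=%")

    def is_properly_encoded(url):
        """Return True if every character in the URL is a safe character
        or part of a percent-escape."""
        return all(c in _SAFE for c in url)

    def fix_encoding(url):
        """Return the URL with unsafe characters percent-encoded."""
        return urllib.parse.quote(url, safe="".join(_SAFE))

    # example: a URL with a literal space and a non-ASCII character
    url = "http://example.com/some page/caf\xe9.html"
    if not is_properly_encoded(url):
        print("badly encoded URL:", url)
        print("suggested form:   ", fix_encoding(url))
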
probably before 2.0 release
---------------------------
* maybe choose a different license for webcheck.css
* make it possible to copy or reference webcheck.css
* make it possible to copy http:.../webcheck.css into place (maybe use scheme system, probably just urllib)
* make more things configurable
* make a Debian package
* maybe generate a list of page parents (this is useful to list proper parent links for problem pages)
* figure out if we need parents and pageparents
* make the time-out used when retrieving a document configurable (see the sketch after this list)
* support for multi-threading (maybe)
* divide problems into transfer problems and page problems (a transfer problem results in a bad link problem on the referring page)
* clean up the printing of messages, especially needed for multi-threading
* go over command line options and see if we need long equivalents
* fix redirection of stdout and stderr so it works properly
* set a maximum transfer size for files and other documents downloaded over HTTP (see the sketch after this list)
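
A minimal sketch of how the configurable time-out and the maximum transfer
size could fit together, using only the standard library (Python 3 naming);
the variable names and the 10-second / 1 MiB defaults are made-up examples,
not existing webcheck settings:

    import socket
    import urllib.request

    TIMEOUT = 10            # seconds; would come from the configuration
    MAX_TRANSFER = 1 << 20  # bytes (1 MiB); would come from the configuration

    # apply the time-out to every socket that urllib creates
    socket.setdefaulttimeout(TIMEOUT)

    def fetch(url, maxsize=MAX_TRANSFER):
        """Fetch at most maxsize bytes of the document at url, raising an
        error if the document turns out to be larger."""
        with urllib.request.urlopen(url, timeout=TIMEOUT) as conn:
            data = conn.read(maxsize + 1)
        if len(data) > maxsize:
            raise IOError("%s exceeds the maximum transfer size" % url)
        return data
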
wishlist
--------
* add code for stripping the last part of a URL (e.g. foo/index.html -> foo/)
* translate file paths given on the command line to file:/// URLs (both URL manipulations are sketched at the end of this list)
* maybe set referer (configurable)
* support for authenticating proxies
* new config file format (if we want a config file at all)
* support FTP proxies
* cookie support (maybe)
* integration with weblint
* combine with a log file checker to also show the number of hits per link
* write a guide to writing plugins
* do form checking
* do spell checking
* test W3C conformance of pages (already done a little)
* maybe make broken links not clickable in report
* maybe store the crawled site's data in some format for later processing or for continuing after an interruption
* create the output directory if it does not exist
* add support for fetching gzipped content to improve performance (see the sketch at the end of this list)
* maybe do HTTP pipelining
* add a favicon to reports
* add a test to see if Python supports HTTPS and fail elegantly otherwise (see the sketch at the end of this list)
* maybe follow redirects of external links
* make the error handling of the HTML parser more robust (maybe send a patch for the HTML parser upstream)
* improve tooltips
* maybe use this as an HTML parser: http://www.crummy.com/software/BeautifulSoup/examples.html
* maybe have a way to output Google sitemap files: http://www.google.com/webmasters/sitemaps/docs/en/protocol.html
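
For the two URL-manipulation items above (stripping the last part of a URL
and translating command-line file paths to file:/// URLs), a minimal sketch
using only the standard library (Python 3 naming); the function names below
are made up for illustration:

    import os
    import urllib.parse
    import urllib.request

    def strip_last_part(url):
        """Strip the last path component of a URL: foo/index.html -> foo/."""
        parts = urllib.parse.urlsplit(url)
        path = parts.path[:parts.path.rfind('/') + 1]
        # drop the query and fragment as well, they refer to the old document
        return urllib.parse.urlunsplit((parts.scheme, parts.netloc, path, '', ''))

    def path_to_url(arg):
        """Translate a local file path given on the command line into a
        file:/// URL; anything that already looks like a URL is left alone."""
        if urllib.parse.urlsplit(arg).scheme:
            return arg
        return 'file://' + urllib.request.pathname2url(os.path.abspath(arg))

    print(strip_last_part('http://example.com/foo/index.html?x=1'))
    print(path_to_url('docs/index.html'))
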
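For the gzipped-content item, a minimal sketch of requesting and
decompressing gzip-encoded documents with the standard library (Python 3
naming); the function name is made up, and real code would also have to
cope with servers that ignore the Accept-Encoding header:

    import gzip
    import urllib.request

    def fetch_compressed(url):
        """Ask the server for a gzip-compressed document and decompress the
        response if the server actually used gzip."""
        request = urllib.request.Request(url, headers={'Accept-Encoding': 'gzip'})
        with urllib.request.urlopen(request) as conn:
            data = conn.read()
            if conn.headers.get('Content-Encoding') == 'gzip':
                data = gzip.decompress(data)
        return data
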
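For the HTTPS item, a minimal sketch of how the check could look; whether
exiting is the right way to "fail elegantly" (rather than just skipping
https:// links) is an open question, and the function name is illustrative:

    import sys

    def check_https_support():
        """Exit with a clear message instead of a confusing traceback when
        this Python build was compiled without SSL support."""
        try:
            import ssl  # imported only to test availability
        except ImportError:
            sys.exit("error: this Python installation lacks SSL support, "
                     "so https:// URLs cannot be checked")

    check_https_support()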