Implement a bash shell script findlinks that writes all hyperlinks contained in a given web page to standard output. By hyperlinks we mean URLs of the following form:

http://<hostname>.<domainname>/<path>/<file>

where <hostname> and <domainname> denote the host and domain name, and the <path> and <file> components are optional.

When generating the output, the optional final slash should be removed, and every hyperlink should occur at most once. A possible output of the shell script findlinks could then be:

$ findlinks http://helpdesk.ugent.be
http://helpdesk.ugent.be/extra/news.php
http://helpdesk.ugent.be/rss.xml
http://lib1.ugent.be/cmsites/default.aspx
http://www.lib.ugent.be
http://www.opleidingen.ugent.be/studiekiezer/nl/index.htm
http://www.ugent.be
http://www.ugent.be/favicon.ico
http://www.ugent.be/phonebook
http://www.ugent.be/portal/nl/CA60.htm
http://www.w3.org/1999/xhtml

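One way to approach this is to fetch the page and filter it through a small pipeline. The sketch below is one possible solution, not the required one: it assumes curl is available for downloading the page, and the regular expression is an assumption about what counts as a well-formed URL (it demands at least two dot-separated labels in the host, so hosts such as .ugent.be or a bare be are skipped).

```shell
#!/bin/bash
# findlinks — list the unique hyperlinks on a web page (sketch).
# Assumptions: curl is installed; a well-formed URL is http:// followed by
# at least two dot-separated host labels and an optional path.

# extract_links: read HTML on stdin, write cleaned-up hyperlinks on stdout
extract_links() {
    grep -Eo "http://[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+(/[^\"'<> ]*)?" |
    sed 's|/$||' |   # remove an optional final slash
    sort -u          # every hyperlink at most once
}

findlinks() {
    curl -s "$1" | extract_links
}

# run only when invoked as a script with an argument, not when sourced
if [[ "${BASH_SOURCE[0]}" == "$0" && $# -ge 1 ]]; then
    findlinks "$1"
fi
```

Splitting the extraction into its own function keeps the network step separate, so the pattern can be tried out on any HTML piped in by hand.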
Try to make sure that ill-formed URLs (e.g. http://.ugent.be or http://be) are ignored.

Use the findlinks shell script as the basis for another bash shell script domaincounts that counts, for each domain name, the number of times it occurs in the hyperlinks of a given web page, e.g.:

$ domaincounts http://helpdesk.ugent.be
9 ugent.be
1 w3.org
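A possible sketch of domaincounts, building on findlinks: it assumes the findlinks script from the previous part is available on the PATH, and it takes the domain name to be the last two dot-separated components of the host name (which matches the ugent.be and w3.org lines in the example, but is an assumption in general).

```shell
#!/bin/bash
# domaincounts — count hyperlinks per domain name (sketch).
# Assumptions: findlinks from the previous exercise is on the PATH;
# a domain name is the last two labels of the host name.

domaincounts() {
    findlinks "$1" |
    sed -E 's|^http://([^/]+).*|\1|' |      # keep only the host name
    awk -F. '{ print $(NF-1) "." $NF }' |   # last two labels: the domain
    sort | uniq -c |                        # count occurrences per domain
    awk '{ print $1, $2 }'                  # normalise to "count domain"
}

if [[ "${BASH_SOURCE[0]}" == "$0" && $# -ge 1 ]]; then
    domaincounts "$1"
fi
```

The final awk strips the leading spaces that uniq -c produces, so the output matches the "9 ugent.be" layout shown above.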