We need three Perl functions to scrape three different pages of a specific website. Below is a summary of the required functionality:

- Every function must offer an option to use (or not use) the [login to view URL] file.
- All functions will return a hash containing the data read, as well as error codes.
- Some data will have to be translated before the function returns it. All translation data will be provided.
- All three functions must be robust enough to handle changes in the webpage structure that render it "unreadable", signaling that condition with error codes.
- Some pages contain a considerable amount of data broken into sub-pages. The function will have to follow the previous/next links and read all related pages in order to gather the complete data.

We will provide documentation for the scraping process to the best of our knowledge; however, some unexpected situations may arise (as in any project) and will have to be handled accordingly. Before we select the winning bid, all prospective developers will receive the full requirements for review. Please provide sample code (preferably a screen scraper you have worked on before) for evaluation. Thank you and best regards.
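As a rough illustration of the contract described above (hash return value with error codes, pagination via next links, and a structure-change error path), here is a minimal Perl sketch. Everything in it is hypothetical: the item markup, the `rel="next"` link convention, and the error codes are placeholders, and the pages are supplied as pre-fetched HTML strings so the sketch stays self-contained; a real implementation would fetch each page with something like `LWP::UserAgent` and use a proper HTML parser.

```perl
use strict;
use warnings;

# Hypothetical scraper: walks from $start through rel="next" links over a
# hashref of url => html, collecting <li class="item"> entries.
# Returns a hashref with an error code (0 on success) and the items read.
sub scrape_listing {
    my ($pages, $start) = @_;
    my (@items, %seen);
    my $url = $start;
    while (defined $url && !$seen{$url}++) {   # %seen guards against link loops
        my $html = $pages->{$url}
            or return { error => 404, message => "page not found: $url", items => [] };
        # Extract the data items; if none match, the page structure has
        # presumably changed and we report it as unreadable.
        my @found = $html =~ m{<li class="item">([^<]+)</li>}g;
        return { error => 422, message => "unrecognized page structure: $url", items => [] }
            unless @found;
        push @items, @found;
        # Follow the "next" link if present; undef ends the loop.
        ($url) = $html =~ m{<a rel="next" href="([^"]+)">};
    }
    return { error => 0, items => \@items };
}

my %site = (
    'p1' => '<ul><li class="item">a</li><li class="item">b</li></ul>'
          . '<a rel="next" href="p2">next</a>',
    'p2' => '<ul><li class="item">c</li></ul>',
);
my $res = scrape_listing(\%site, 'p1');
print "error=$res->{error} items=@{$res->{items}}\n";
```

The same shape extends naturally to the other two functions: each returns a hash whose `error` field distinguishes success, missing pages, and structural changes, so the caller never has to parse prose messages to detect failure.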
## Deliverables
1) Complete and fully-functional source code of all work done.
2) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd-party components, etc.)
## Platform
Platform independent.