Web Scraping is the process of extracting data or information from an online source such as a website, database, or application. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. These experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work on the client's part.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects that our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich, structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so that the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to build a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
Based on 362,082 reviews, clients rate our Web Scraping Specialist freelancers 4.9 out of 5 stars.
I’m preparing a targeted marketing campaign for hair-care products and need a fresh list of real, working braiders from a selection of U.S. cities that I’ll specify once we start. Your task is to dive into Instagram hashtags such as #braids, #boxbraids, #knotlessbraids (and any related tags you know work well), vet each profile for genuine, recent activity—at least one post within the past 30 days—and capture four data points: • Name (or the public display name) • City • State • Direct link to the Instagram profile Please skip anyone who looks inactive, spammy, or clearly headquartered outside the United States. A quick scroll through their feed should confirm they are taking clients and posting new work regularly. Drop everything into...
I have a working Python script that talks to the Kalshi prediction-market API, pulls live data, and fires off trades automatically through simple web-request helpers. Functionally it looks solid from my end, but I’m not a developer and would like an expert eye on it before I trust it with larger positions. The review should cover every critical angle—accuracy of the trading logic, efficiency of each call or loop, and robust error-handling so a bad response or network hiccup never leaves an order hanging. Because the script relies heavily on APIs and a small amount of web-scraping, please verify that authentication, rate-limit handling, and data parsing follow best practices and won’t put the account at risk. Deliverables • A line-by-line code review (commented or...
I'm seeking a versatile virtual assistant to join my team for 15+ hours per week. The role involves a mix of marketing and admin-related support tasks. The ideal candidate should be skilled in creating pitch decks and PowerPoint presentations, branding and design using Figma, and video editing. Additionally, the role includes web scraping, bookkeeping specific to Australia, and tasks requiring excellent written English. Key Requirements: - Proficiency in Figma for branding and design - Experience in creating engaging pitch decks and PowerPoint presentations - Video editing skills - Ability to perform web scraping tasks efficiently - Knowledge of Australian bookkeeping practices - Strong written English for various tasks Ideal Skills and Experience: - Previous experience as a virtual...
I have three specific school-website links that list all current teachers and administrators. From each page I need a clean scrape of every staff member’s name, role, email address, plus the city/town and the school name, compiled into a single Excel workbook. Alongside that, I already hold an Excel sheet that contains a roster of Tow and roadside drivers. The sheet has their names and the URLs of the companies they work for, but no contact details. Please crawl those company sites, locate each driver’s email address, and append the results to the same workbook, using matching columns so everything stays consistent. Key points to keep in mind: • Final deliverable: one Excel file ready for copy-and-paste outreach. • Source material: my three school websites and...
I need the entire contents of a specific website captured in a single pass. That means every piece of on-page text, all publicly visible image files, and every internal or external hyperlink. Once scraped, the information should be organised into a clean CSV file—one row per page—with columns for page URL, full body text, image file names, and link destinations. Please download the images themselves as well and bundle them in a separate folder (a simple ZIP is fine); the CSV should reference the exact filenames so everything lines up. I’m happy for you to use Python with BeautifulSoup, Scrapy, Selenium or whichever stack you prefer, as long as the final output meets these acceptance criteria: • Complete CSV containing text, image names, and link URLs for each ...
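For a brief like this, a minimal single-pass crawl in Python with requests and BeautifulSoup might look like the sketch below; the start URL, output filenames, and the same-domain rule are assumptions, and a real run would add politeness delays, robots.txt checks, and better error handling.

```python
# Sketch of a single-pass site crawl that writes one CSV row per page
# and downloads images into a local folder. START_URL and output paths
# are placeholders; adjust the same-domain check to the real target.
import csv
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/"        # placeholder target
IMAGE_DIR = "images"
os.makedirs(IMAGE_DIR, exist_ok=True)

seen, queue = set(), [START_URL]
domain = urlparse(START_URL).netloc

with open("site_dump.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["page_url", "body_text", "image_files", "links"])

    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=15)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")

        # Collect every link; only same-domain pages are queued for crawling.
        links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
        queue.extend(l for l in links if urlparse(l).netloc == domain)

        # Download images and remember their local filenames for the CSV.
        image_files = []
        for img in soup.find_all("img", src=True):
            img_url = urljoin(url, img["src"])
            name = os.path.basename(urlparse(img_url).path) or "image"
            try:
                data = requests.get(img_url, timeout=15).content
                with open(os.path.join(IMAGE_DIR, name), "wb") as out:
                    out.write(data)
                image_files.append(name)
            except requests.RequestException:
                pass

        writer.writerow([url, soup.get_text(" ", strip=True),
                         ";".join(image_files), ";".join(links)])
```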
I am looking for a Python developer to create a simple and focused scraper script for Facebook Marketplace. Project Idea: The script will open a single Facebook Marketplace seller page and: • Extract all product links belonging to that seller only • Ignore any other data (no names, no prices, no images) • The final output should be a list of links only • Each product link on a separate line (link under link) Exact Requirements: • Input: Facebook Marketplace seller page URL • Output: • A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, ...
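A starting point for this kind of seller-page collector, sketched with Playwright's sync API; the seller URL, the item-link selector, and the scroll heuristic are assumptions, and Marketplace normally requires a logged-in session, so the selectors would need adjusting against the live page.

```python
# Sketch of the seller-page link collector using Playwright (sync API).
# SELLER_URL and the item-link selector are assumptions; Marketplace
# markup changes frequently and usually requires login, so treat this
# as a starting point rather than a drop-in tool.
from playwright.sync_api import sync_playwright

SELLER_URL = "https://www.facebook.com/marketplace/profile/PLACEHOLDER"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(SELLER_URL, wait_until="networkidle")

    # Keep scrolling until no new content loads (simple page-height heuristic).
    last_height = 0
    while True:
        page.mouse.wheel(0, 4000)
        page.wait_for_timeout(2000)
        height = page.evaluate("document.body.scrollHeight")
        if height == last_height:
            break
        last_height = height

    # Assumed pattern: product links contain "/marketplace/item/".
    anchors = page.query_selector_all('a[href*="/marketplace/item/"]')
    links = sorted({a.get_attribute("href").split("?")[0] for a in anchors})

    # One product URL per line, as the brief asks.
    with open("seller_products.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(links))

    browser.close()
```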
I have a set of voter-list PDFs released by the election commission. The layout across all files is identical, so positional parsing is reliable. Right now I simply need the current batch converted, but long-term I want a reusable Python utility that pulls the following six columns straight into Excel: • Name • FathersName • Age • Gender • VoterID • SerialNumber, plus Section Name, Polling Station Name, etc. Scope of work 1. Run the first extraction and hand me the .xlsx file so I can verify accuracy. 2. Package the underlying code (Python 3.x) with clear instructions and any dependencies so I can repeat the conversion on future lists without further help. Technical notes – Consistent layout means you can lean on libraries like pdfplumber, camelo...
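Under the assumption that each page exposes one clean table, a first pass with pdfplumber and pandas could look like the sketch below; the filename, column list, and table-extraction settings are placeholders that would be tuned against the real layout.

```python
# Sketch of the PDF-to-Excel conversion using pdfplumber and pandas.
# The filename, column names, and the assumption that each page exposes
# one clean table are placeholders; a real voter-list layout would need
# its own positional settings (explicit lines, crop boxes, etc.).
import pdfplumber
import pandas as pd

COLUMNS = ["Name", "FathersName", "Age", "Gender", "VoterID", "SerialNumber"]

rows = []
with pdfplumber.open("voter_list.pdf") as pdf:   # placeholder filename
    for page in pdf.pages:
        table = page.extract_table()
        if not table:
            continue
        for row in table:
            # Keep only rows with at least the expected number of cells.
            if row and len(row) >= len(COLUMNS):
                rows.append(row[:len(COLUMNS)])

pd.DataFrame(rows, columns=COLUMNS).to_excel("voter_list.xlsx", index=False)
```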
AI Automation for Finance Analytics AI / Machine Learning DO NOT BID IF BIDDING FOR 40-HOUR WORK WEEK WE ARE LOOKING FOR A CONSULTANT / BUILDER / TUTOR TO WORK WITH OUR TEAM 3-10 HOURS A WEEK TO BUILD THE SYSTEM JOINTLY DO NOT BID FOR LONGER THAN THOSE HOURS. DO NOT BID FOR FULL-TIME WORK DETAILS OF WHAT I NEED HELP WITH I run a real estate private equity and hotel development platform. We want to replace manual analysis and reporting with a practical AI workflow. This is about extracting, comparing, and interpreting data. Excel and PowerPoint remain the source of truth. What we need: - Compare PowerPoint vs Excel and flag mismatches - Explain underwriting models and trace outputs - Compare legal/term sheets vs financial assumptions - Track document versions and changes - Summarize deal...
I am currently using Apify at $1.5 per 1,000 leads. I need this at scale (around 50k emails), so it needs a cost-effective solution. Bid on this proposal and I shall DM you; I need to know the cost for: 1. Apollo emails 2. LinkedIn emails
Hindi and Indonesian Safety Hardening and Safety Dataset - Annotation 1. Annotation Requirement Description This annotation task aims to construct safety datasets for Hindi and Indonesian through manual annotation. 1.1 Basic Task Information Task Summary: Annotate five types of raw data (sensitive words, text samples, image samples, "image-text" pairs, "video-text" pairs) in Hindi and Indonesian according to requirements. Deliverable Types and Formats: a. Sensitive Words: Words, phrases. Delivered in Excel and JSONL formats only. b. Text Samples: Sentences, paragraphs. Delivered in Excel and JSONL formats only. c. Image Samples: Images in JPG or PNG format, stored in folders. Deliver Excel, JSONL, and corresponding attachment folders. d. "Image-Text" Pairs...
I need a reliable scraper that monitors every basketball league listed on Bet365 (); if accessing that is an issue, an alternative source can be used. The script must do two separate pulls for each game: Objective 1 • Run #1 – as soon as Bet365 publishes the starting lineup. • Run #2 – again on game day, no later than one hour before tip-off. For each run, capture Teams and scores, all published lineups and odds, plus the Q1 Total, full Quarter and Half statistics as soon as they appear. The goal is to analyse how the line and odds move between the first and second snapshot, feeding a broader betting-strategy model, so accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • ...
I need a one-time, UK-wide scrape that captures every wedding-related business you can find across England, Wales, Scotland and Northern Ireland—no single directory limitations, so feel free to pull from any public site that meets the brief. Deliverable • A single Excel file containing the following columns: URL, Business Name, Full Address, Post Code, Telephone, and every email address that appears on the site (not just the first one you find). • The sheet should be neatly de-duplicated and ready for filter/sort. Business types to include • Wedding & Bridal Wear • Wedding Planners / Services • Wedding Cars, Horse & Carriages • Wedding Venues • Photographers & Videographers • Florists & Wedding Flowers •...
I need a small automation script that periodically checks item availability on the Bigbasket website and pings me on Telegram the moment any of the tracked products come back in stock. You are free to choose the underlying tech stack (Python + Requests/BeautifulSoup, Selenium, Playwright, or a headless browser of your choice) as long as it works reliably with Bigbasket’s current site layout and protects my account from rate-limit blocks or captchas. The flow I have in mind is straightforward: I feed the bot a list of product URLs (or SKUs). It runs on a schedule I can change—every few minutes during peak shortages, maybe every hour otherwise—grabs the stock status, and fires a concise Telegram message whenever the status flips from “Out of Stock” to “Av...
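One possible shape for such a monitor, sketched with plain requests and the Telegram Bot API; the bot token, chat ID, product URLs, and the out-of-stock text check are all placeholders, and the real detection rule depends on Bigbasket's current markup.

```python
# Sketch of the stock-alert loop: poll each product page and send a
# Telegram message when the status flips to in-stock. BOT_TOKEN, CHAT_ID,
# the product URLs, and the "out of stock" text check are assumptions.
import time

import requests

BOT_TOKEN = "PLACEHOLDER_TOKEN"
CHAT_ID = "PLACEHOLDER_CHAT_ID"
PRODUCTS = ["https://www.bigbasket.com/pd/placeholder-product/"]
HEADERS = {"User-Agent": "Mozilla/5.0"}   # plain requests are often blocked otherwise

def notify(text):
    # Telegram Bot API sendMessage call.
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

last_status = {}
while True:
    for url in PRODUCTS:
        try:
            html = requests.get(url, headers=HEADERS, timeout=15).text
        except requests.RequestException:
            continue
        in_stock = "out of stock" not in html.lower()   # naive heuristic
        if in_stock and not last_status.get(url, False):
            notify(f"Back in stock: {url}")
        last_status[url] = in_stock
    time.sleep(300)   # poll every 5 minutes; make this configurable
```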
I need a reliable script that can pull a complete database of Food & Beverage outlets in Singapore directly from Google (Maps or Search). The scope covers Restaurants, Cafes, Bars, Clubs and Bistros island-wide. For every venue scraped I must receive: • Name • Full address (street, unit, postal code) • Phone number • Type of outlet • Website • General Area Deliverable in Excel. Need 4 tabs representing each region in Singapore, with each tab consisting of the several districts within that region. Please ensure: • No duplicates • Accurate field separation (e.g., address split into distinct columns) • Script runs without paid APIs Let me know your proposed method and approximate turnaround time, and feel free to highlight any previous scraping w...
I need every public phone number that appears on gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • n...
I need a reliable scraper that monitors every basketball league listed on Bet365 (). The script must do two separate pulls for each game: Objective 1 • Run #1 – as soon as Bet365 publishes the starting lineup. • Run #2 – again on game day, no later than one hour before tip-off. For each run, capture Teams and scores, all published lineups and odds, plus the Q1 Total, full Quarter and Half statistics as soon as they appear. The goal is to analyse how the line and odds move between the first and second snapshot, feeding a broader betting-strategy model, so accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • first-pull values • second-pull val...
I need help streamlining a small questionnaire that captures only open-ended answers. Respondents will be typing directly into a web form, and I simply want each answer stored and exported as clean, plain-text strings—no JSON, CSV, or additional metadata layers. Your task is to: • Set up the formatting logic so every submission is saved exactly as entered, preserving paragraph breaks but stripping any extra HTML or special characters the form might inject. • Provide a straightforward way for me to download or copy that text in bulk once the survey closes. If you prefer, a lightweight script or form-handler (PHP, Python, or JavaScript are all fine) that writes the responses into a flat .txt file or an equivalent plain-text store will meet the requirement. Please keep th...
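As one of the stacks the brief allows, a minimal Flask handler along these lines could append each submission to a flat text file; the endpoint path and the form field name "answer" are assumptions.

```python
# Sketch of a minimal form handler that appends each open-ended answer to
# a flat text file exactly as typed, with injected HTML tags stripped.
# The "/submit" route and the "answer" field name are assumptions.
import html
import re

from flask import Flask, request

app = Flask(__name__)

TAG_RE = re.compile(r"<[^>]+>")   # strip any HTML tags the form might inject

@app.route("/submit", methods=["POST"])
def submit():
    raw = request.form.get("answer", "")
    # Decode entities and drop tags, but keep the respondent's line breaks.
    clean = html.unescape(TAG_RE.sub("", raw)).strip()
    with open("responses.txt", "a", encoding="utf-8") as f:
        f.write(clean + "\n\n---\n\n")   # simple separator between submissions
    return "OK"

if __name__ == "__main__":
    app.run()
```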
I need a seasoned backend developer to design and implement a secure REST API that lets my users check award-seat availability (Avios) directly from Iberia.com. The core of the job is to automate the full search flow — login, query, filter, and return the results — while keeping the service fast and reliable. Authentication & security The service must issue and validate JWT tokens for every request beyond the public health-check route. Token refresh, revocation, and a simple role model (“user” vs. “admin”) should be built in from the start. Flight data extraction I do not have official Iberia developer access, so we will need to pull the data ourselves. I’m open to whichever tooling you are most comfortable with — BeautifulSoup, Sel...
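For the token layer described here, a minimal sketch with PyJWT and an HS256 shared secret might look like this; the secret, expiry window, and the simple user/admin role claim are assumptions, and refresh/revocation would be layered on top (the Iberia search flow itself is out of scope in this snippet).

```python
# Sketch of issuing and validating JWTs with a simple "user"/"admin" role
# claim, using PyJWT with an HS256 shared secret. SECRET_KEY, the expiry
# window, and the claim names are assumptions; refresh and revocation
# would sit on top of this (e.g. a token blacklist or rotating key ids).
import datetime

import jwt

SECRET_KEY = "change-me"            # placeholder secret
ALGORITHM = "HS256"
TOKEN_TTL = datetime.timedelta(hours=1)

def issue_token(user_id: str, role: str = "user") -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {"sub": user_id, "role": role, "iat": now, "exp": now + TOKEN_TTL}
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)

def validate_token(token: str, required_role: str = "user") -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
    payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    if required_role == "admin" and payload.get("role") != "admin":
        raise PermissionError("admin role required")
    return payload

if __name__ == "__main__":
    t = issue_token("demo-user", role="admin")
    print(validate_token(t, required_role="admin"))
```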
I'm looking for an experienced freelancer to build a complete, low-maintenance web-based educational app that uses AI to suggest peptides for anti-aging and health issues (e.g., recovery, inflammation) based on public research. The app will include study-based dosage, cycle, and usage suggestions, plus an integrated cost-comparison tool similar to (aggregating prices from legal suppliers via affiliates or scraping). This is strictly for educational purposes—**no medical advice or promotion of unapproved substances**. The app must include strong disclaimers everywhere to comply with FDA regulations. **Project Goals:** - Create a freemium SaaS web app (MVP first, then scale). - Low overhead: Use no/low-code tools where possible. - Monetization: Subscriptions ($9–$29/...
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce scrapi...
I need a small script or micro-service that calls an odds API once per day and extracts NBA player-prop markets—specifically all categories—for every NBA game on the board. The job is only about player props; spreads, moneylines, and totals can be ignored. Here is what I expect: • Code (Python or Node preferred, but I’m flexible) that hits a public or paid odds endpoint, parses the daily response, and saves the three prop categories in a tidy JSON or CSV file, Excel preferably. • A clear spot in the code where I can drop my own API key and set the run time (cron, Cloud Function, Lambda, etc.). • Basic logging so I can confirm the call succeeded and see any errors. • Quick README explaining setup and the output format. If the script runs co...
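A sketch of the daily pull in Python; the endpoint URL, query parameters, and response fields below are hypothetical stand-ins for whichever odds provider is chosen, and only the overall pattern (fetch, filter to props, write CSV, log) is meant to carry over.

```python
# Sketch of a once-a-day pull of NBA player-prop markets. The endpoint,
# query parameters, and response shape are HYPOTHETICAL stand-ins for the
# chosen odds provider; swap them for the real API's schema.
import csv
import logging
from datetime import date

import requests

API_KEY = "YOUR_API_KEY"   # drop your key here
URL = "https://api.example-odds.com/v1/nba/player-props"   # hypothetical endpoint

logging.basicConfig(level=logging.INFO, filename="props.log")

def run():
    resp = requests.get(
        URL,
        params={"apiKey": API_KEY, "date": date.today().isoformat()},
        timeout=30,
    )
    resp.raise_for_status()
    markets = resp.json()   # assumed: list of {"game", "player", "market", "line", "odds"}

    out = f"props_{date.today()}.csv"
    with open(out, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["game", "player", "market", "line", "odds"])
        writer.writeheader()
        for m in markets:
            writer.writerow({k: m.get(k) for k in writer.fieldnames})
    logging.info("Saved %d prop rows to %s", len(markets), out)

if __name__ == "__main__":
    run()   # schedule via cron, Cloud Function, Lambda, etc.
```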
PDF to Excel Data Scraper Needed Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios) Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund Monthly Factsheets (PDFs). I will provide the URLs/Files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file. The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% data works. Scope of Work: Input: ~24 PDF Files (Monthly Factsheets). Target Funds: For each month, extract data for the Top 10 Equity Funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc. - list wi...
I’m expanding our Florida outreach list and need a reliable web-scraped data set of school, college, and university administrators who oversee Nursing or other Healthcare programs. You’ll pull the information directly from two source types only—official institution websites and reputable educational directories—so every entry must be traceable back to one of those pages. Here’s exactly what must land in the spreadsheet: • Institution name • Contact’s first and last name • Job title (Administrator, Director of Nursing, CTE Healthcare lead, etc.) • Verified email address • State (always Florida) Format & delivery – Send the file in Excel (.xlsx). – First progress drop: within 5 days so I can spot-c...
We want to do this in a consulting / facilitators / builders format in which we work with the facilitator / consultant / trainer for 3-6 hours a week for 3-6 months in order to help us collaboratively create various agents for our private equity business. The only billed time will be the time spent on the video call with our team, unless specifically approved otherwise. We want to be able to create a screen-scrape tool to average certain cost items of specific real estate projects. We also want to compare legal documents vs term sheets and Excel spreadsheets. Data sources • Company databases (SQL, flat files, Excel exports) - Dropbox (all our files are in Dropbox) • Extensive web scraping for competitor benchmarks and investment-market signals If you have ideas for safely add...
I need a single WebExtension that runs in both Chrome and Firefox and turns our current manual workflow into a one-click process. Its core job is data collection—capturing information from pages we specify—while also handling the little chores my team repeats every day: filling forms, scraping targeted fields, and kicking off routine browser actions such as page refreshes or button clicks once certain conditions are met. The add-on must connect cleanly to three parts of our internal stack: • our CRM system (REST APIs already documented) • the project-management tool we use (webhook support available) • a central database for long-term storage (PostgreSQL) Please build with the standard WebExtension/Manifest V3 approach so we can maintain a single code...
I need a web scraping expert to scrape data from Indiegogo and export it to Excel. Details I need for the projects are: Title: Project title. Category: The category of the project based on the Indiegogo categorization system. Sub-category: The sub-category of the project based on the Indiegogo categorization system. Close Date: Close date of the campaign. Open Date: Open date of the campaign. Currency: Currency used for collected funds. Funds Raised: The amount of funds raised. Funds Raised Percent: The percent of funds raised relative to the funding target. Funding Target: The amount of funds the campaign initiator aims to collect. Country: Country in which the project is based. Publisher: The name of the campaign initiator. Backers: The number of people who decided to fund the campaign. Updates: ...
I’m looking for a well-structured Python solution, built around BeautifulSoup (BS4) and any supportive libraries you deem essential, that reliably pulls both product details and customer reviews from Lazada on a daily schedule. The data will fuel ongoing competitor research, so consistency and clarity of the output are critical. I am specifically looking to get the data using BS4 by bypassing the captcha. Here’s how I picture the flow: • Input: category URL(s) or product list I supply in a CSV/JSON. • Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores. • Output: clean CSV or JSON dropped into a dated folder after each run. Make the script easy to tweak if Lazada changes its markup. Acceptance criteria 1. S...
I need a seasoned Python developer to build a robust scraper that collects the required data and writes it straight to JSON—no additional cleaning or processing necessary. Once we begin I’ll provide the target URL(s) and any access details; for now, assume a standard public site with pagination and occasional anti-bot checks. Core expectations • Written in Python 3 using requests/BeautifulSoup or Scrapy; resort to Selenium only if there’s no lighter workaround. • Handles pagination, retries, and polite delays gracefully so the run can complete unattended. • Config file or clear constants for headers, cookies, and start URLs, letting me tweak targets without editing core logic. • Produces a single JSON file (or one file per page if that’s...
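A compact sketch of that unattended loop with requests and BeautifulSoup; the start URL, the item selector, and the page-number pattern are placeholders to be replaced once the real target is known.

```python
# Sketch of the unattended pagination loop: retries with backoff, polite
# delays, and a single JSON output file. START_URL, the ".item" selector,
# and the page-number pattern are placeholders for the real site.
import json
import time

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/listing?page={page}"   # placeholder
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; scraper-sketch)"}
DELAY_SECONDS = 2
MAX_RETRIES = 3

def fetch(url):
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            resp = requests.get(url, headers=HEADERS, timeout=20)
            if resp.status_code == 200:
                return resp.text
        except requests.RequestException:
            pass
        time.sleep(DELAY_SECONDS * attempt)   # back off between retries
    return None

records, page = [], 1
while True:
    html = fetch(START_URL.format(page=page))
    if html is None:
        break
    soup = BeautifulSoup(html, "html.parser")
    items = soup.select(".item")   # assumed selector for one record
    if not items:
        break                      # no more pages
    for it in items:
        records.append({"text": it.get_text(" ", strip=True)})
    page += 1
    time.sleep(DELAY_SECONDS)      # polite delay between pages

with open("output.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```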
I need to build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every...
I have a data-analysis pipeline that relies on a steady flow of fresh product images from a well-known e-commerce site. What I need is a robust scraper that can navigate the catalog, collect every product’s main and variant images, and deliver them to me neatly organized. Key points you should know: • Target: a single e-commerce platform (URL supplied after award). • Payload: high-resolution image files plus a CSV/JSON map linking each file to product ID, title, price, and category text that you extract during the same run. • Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart. • Frequency: I’ll trigger the crawl weekly, so reusable code is a must. I’m happy with Pytho...
Help wanted: daily/multi-daily comparison of supplier prices and stock levels (B2B webshop) Text: We operate a B2B webshop where business customers can place orders or commission items on request. Most of the goods are sourced directly from manufacturers. For most suppliers we have access to their stock levels and current prices; for some, no login is required, while others require login credentials. We are looking for a solution or a skilled professional who can help us retrieve supplier prices and stock levels daily — ideally multiple times per day — and compare them with our internal purchase prices so we stay up to date. No automatic syncing with our system or automatic price changes are required. It is sufficient if discrepancies between supplier prices and our system pu...
We are looking to hire an experienced freelancer for B2B contact data scraping using Apollo.io. Project Requirements Scrape contact data using Apollo filters provided by us Data must be extracted only after confirming filters are correct We will start with one state, and if the data quality is good, we will assign more states Data Fields Required Each contact must include: Full Name Job Title (Decision Makers only) Company Name Business Email (Verified) Phone Number / Mobile (where available) Company Revenue Location (City, State, Country) Company Website / LinkedIn Quality Expectations No dummy or generic emails No duplicate records Clean, structured, and fresh data Apollo-sourced data only Process We provide filters Freelancer applies filters and shares sample data ...
I need OpenClaw on my dedicated Mac with three core capabilities: Chrome automation: open websites, click elements, fill forms, extract structured snippets, and return results in WhatsApp. Coding/app workflows: generate code locally and optionally interact with web dev platforms when commanded. Deep research workflows: run multi-step web research, compare sources, and return concise findings with references. Security and reliability are mandatory: least privilege, approved-user-only WhatsApp commands, startup on boot, restart on crash, logs, and health check.
I need all the data from a crawl that starts from , walks through every brand, opens each handset page and captures the complete specification table exactly as shown. The end product I expect is: • A clean JSON file where every phone is an object containing every available field (model name, release date, dimensions, display, chipset, camera, battery—everything published on the spec sheet). Please make sure the scraper respects polite crawling rules, handles pagination and brand/model edge cases gracefully, and returns UTF-8 encoded text. If anything on the site requires minor waits or retries, handle it so that nothing blocks your way. I will test the JSON data, and if it validates as proper data, the job is done.
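Assuming a conventional two-column spec table, the per-handset step might be sketched like this; the URL and selectors are placeholders, and the brand/model crawl around it is omitted.

```python
# Sketch of turning one handset's specification table into a JSON object.
# Assumes a conventional two-column spec table (label cell, value cell);
# the real site's markup will need its own selectors, and the surrounding
# brand/model crawl is left out of this snippet.
import json

import requests
from bs4 import BeautifulSoup

def scrape_spec_sheet(url):
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=20).text
    soup = BeautifulSoup(html, "html.parser")
    spec = {}
    for row in soup.select("table tr"):
        cells = row.find_all(["th", "td"])
        if len(cells) >= 2:
            label = cells[0].get_text(" ", strip=True)
            value = cells[1].get_text(" ", strip=True)
            if label:
                spec[label] = value
    return spec

if __name__ == "__main__":
    phone = scrape_spec_sheet("https://example.com/phone-spec-page")  # placeholder URL
    print(json.dumps(phone, ensure_ascii=False, indent=2))
```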
I have a list of titles (number depends on the search results, and the last time I checked it was 250) currently tagged “In Production” on IMDbPro and I need every line item turned into a clean, ready-to-filter spreadsheet. Because IMDbPro expressly forbids scraping, each record must be gathered by hand. Here is what I expect to see, each point in its own column: • Movie Title • Director(s) • Composer(s) – if any are listed • Music Supervisor(s) • Producer(s) • Producer contact details (email and/or phone whenever they appear) • Direct URL of the movie page • Cast. The workflow is straightforward: open the title, copy the details, paste them into the sheet, move on to the next film. Where information is missing on IMD...
There are around 20k reviews publicly available, so I can't scroll endlessly; I need you to scrape them for me and put them in a spreadsheet along with filters from 1 star to 5 stars. The job is simple for a professional, so please be realistic with prices. Should you do this correctly and quickly, I will give you more leads to scrape. Thanks
I'm looking for a qualified freelancer to develop a bot that can navigate the Almaviva Egypt website just like a human would. The bot must be capable of completing three key tasks: - Filling out all necessary appointment-related information - Selecting the date and time of the appointment - Submitting the request for the appointment Considering the constraints of the website, I require a bot that can still function proficiently with a limited number of appointment slots. Moreover, it must be programmed to input login credentials. A crucial requirement is that it can bypass or solve captcha verifications, ensuring a smooth booking process. The essential skillset for this project comprises expertise in Python, as the bot should be developed in this language. Familiarity with web scra...
Hello, I am looking for a professional translator who can accurately and naturally translate Japanese content into English. The ideal candidate will have experience in translating business, technical, or creative content and can maintain the original tone and meaning while producing fluent, high-quality English text. Project Requirements: Translate Japanese text into clear, accurate, and natural English Maintain the original tone, style, and nuance of the Japanese content Ensure proper grammar, punctuation, and formatting Deliver translations on time and communicate proactively if there are any questions Qualifications: Native or near-native English proficiency Proven translation experience with samples or portfolio preferred Attention to detail and commitment to high-quality work Addi...
I need a clean, up-to-date mailing list focused exclusively on schools and daycares, camps, and churches located in my immediate area. After I award the project I will give you the exact city limits and surrounding ZIP codes to keep the search tight. For every entry I want the business or institution name, their direct email, a working phone number, and the mailing address. Accuracy matters more than volume—please verify that each record is current and remove any duplicates you find along the way. The finished file should arrive as an Excel or Google Sheet that I can sort and filter easily and use to create mailing labels. If you already use tools such as LinkedIn Sales Navigator, Apollo, Hunter, or a similar scraper/validation service, let me know; anything that help...
I need a reliable way to pull data from Facebook Marketplace seller pages at scale. The target platform is Facebook; other marketplaces such as eBay, Amazon or Etsy are irrelevant for this job. Here’s what I’m after: when I paste one or many seller profile URLs into your script or small desktop app, it should crawl every public listing on those pages and export the results to CSV or Google Sheets. I mainly care about item title, price, description, photos (image URLs are fine), posting date, item location and the seller’s profile link so I can trace each record back to its source. If you can collect additional fields that Facebook exposes, even better—just keep everything neatly labelled. No hard requirement on the stack: Python with BeautifulSoup / Selenium, ...
I am looking for an experienced developer with strong expertise in Python and web automation to build a smart system for monitoring ticket availability and event updates on the Webook platform. The system should focus on automation, notifications, and usability while following best technical and compliance practices. Scope of Work • Develop a Python-based automation system to monitor events and ticket availability. • Send real-time notifications when: • New events are published • New ticket batches become available • Build a clean and user-friendly dashboard to: • Manage monitoring settings • Control alerts and configurations • Implement structured and scalable automation logic. • Ensure the solution is maintainable and adaptable to f...
For an upcoming market research study, I need a fully-automated workflow that gathers and enriches data from well over 500 LinkedIn profiles. The automation should locate the profiles that match criteria I will provide, pull the key public details, then append reliable off-platform contact information so I can reach those professionals directly. Please design the script or low-code sequence with any reliable stack you prefer—Python, Selenium, PhantomBuster, Sales Navigator API, or comparable tools are fine as long as the method is repeatable and respects rate limits. Deliverables • CSV/Excel file containing one row per person with: – Current job title – Company name – Verified email (and phone, when available) • Source code or workflow fi...
I need you to take more than 200 products that currently appear on my suppliers' websites (all the content is in text and image format) and publish them correctly in my Shopify store. Also remove the ones that are discontinued. Scope of work • Copy the name, description, price, variants and key attributes of each product. • Download and upload the images in high quality, associating them with the corresponding product. • Create/adjust collections, tags and metadata to support navigation and Shopify's internal SEO. • Verify that each product page ends up with inventory, SKU and shipping options configured. • Maintain visual and formatting consistency across...
I have a growing list of company names, and I need a small, reliable Python script that can: Automatically find each company’s career/jobs page where open positions are posted (pages may be built using HTML, JavaScript, or modern front-end frameworks) Navigate through all job listings, including: Pagination (page numbers, next/previous, etc.) “Load more” buttons Infinite scrolling Ability to fetch data from multiple pages (e.g., page 3, 4, or beyond) Apply job filters, especially location-based filtering, so that only job links for specific locations are collected Extract only individual job posting links after filters are applied Visit each job link and scrape complete job details, including: Job title Job description Location Employment type (if available) Department / ...
I need help to make my catalogue of automotive spare parts by pairing every OEM number I supply with a clean, high-resolution product photo and basic part information. The scope covers the full range of engine, suspension and brake system components, so you’ll be digging through manufacturer websites and trustworthy e-commerce listings until you find an image that is crisp, watermark-free and matches the exact OEM reference. Once you locate a match, capture the part name exactly as it appears on the source page, copy the product-page link, download the image at its highest available resolution, and note everything in a structured Google Sheet. File naming should mirror the OEM numbers so that images and rows line up perfectly. Deliverables • A Google Sheet containing OEM num...