
Navigating LinkedIn Data Collection with ScrapingBee - A Proxy Rotation Guide

· 3 min read
Bakar Tavadze


Have you ever tried scraping LinkedIn for professional insights, only to hit a wall with IP bans and rate limits? LinkedIn is notoriously tough to scrape, thanks to its robust anti-scraping measures. Today, we're diving into a smarter approach to scraping LinkedIn without getting your IP address banned. We'll leverage ScrapingBee, a powerful web scraping API that handles proxy rotation, headless browsers, and even JavaScript rendering for us. So, let's get to it and unlock the power of LinkedIn data, responsibly and efficiently.

The Strategy

Our game plan is straightforward:

  1. Use ScrapingBee to manage the complexities of web scraping, including proxy rotation, which is crucial for avoiding IP bans on LinkedIn.
  2. Scrape LinkedIn for specific data points, such as company information, job postings, or professional profiles, while respecting LinkedIn's Terms of Service.

Here's what you need to get started.

Prepping Your Toolkit

Before we begin, you'll need:

  1. A ScrapingBee account, equipped with API credits and proxy rotation capabilities.
  2. Basic Python knowledge, for scripting our scraping adventure.
  3. An understanding of what you're looking to scrape on LinkedIn and why (always scrape ethically!).

The Script

Here's a Python script example that demonstrates how to use ScrapingBee to scrape LinkedIn. This script is purely educational. Always ensure your scraping activities comply with LinkedIn's Terms of Service and ScrapingBee's use policies.

import os
import requests

# ScrapingBee API key, read from the environment
API_KEY = os.environ['SCRAPINGBEE_API_KEY']

# The LinkedIn URL you wish to scrape
linkedin_url = ''

# ScrapingBee parameters
params = {
    'api_key': API_KEY,
    'url': linkedin_url,
    'render_js': 'true',     # LinkedIn requires JS rendering
    'proxy_rotate': 'true',  # Enable proxy rotation
}

def scrape_linkedin():
    response = requests.get('https://app.scrapingbee.com/api/v1/', params=params)
    if response.status_code == 200:
        # Process your scraped data here
        print(response.text)
    else:
        print(f"Failed to retrieve data: {response.status_code}")

def main():
    scrape_linkedin()

if __name__ == '__main__':
    main()

Breaking Down the Script

ScrapingBee Setup: We're sending a GET request to ScrapingBee's API, passing in our target URL along with options to render JavaScript and rotate proxies.
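To make the request construction concrete, here's a quick sketch of the query string requests builds from that params dict. The key and target URL are placeholders, not real values:

```python
from urllib.parse import urlencode

# Placeholder values for illustration; substitute your real key and target URL
params = {
    'api_key': 'YOUR_API_KEY',
    'url': 'https://www.linkedin.com/company/example',
    'render_js': 'true',
    'proxy_rotate': 'true',
}

# requests encodes the params dict into the query string like this
full_url = 'https://app.scrapingbee.com/api/v1/?' + urlencode(params)
print(full_url)
```

Note that urlencode percent-encodes the LinkedIn URL, so it travels safely as a single query parameter.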

LinkedIn URL: Replace the empty '' in linkedin_url with the URL of the LinkedIn page you aim to scrape.

Data Processing: The script currently just prints the HTML response, but you can extend this to parse and extract specific data points using libraries like BeautifulSoup.
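As a taste of that parsing step, here's a minimal, dependency-free sketch that pulls the page title out of the HTML using the standard library's html.parser (BeautifulSoup would make this much more pleasant for real work; the sample HTML below is illustrative, not real LinkedIn markup):

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside the first <title> tag."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ''

    def handle_starttag(self, tag, attrs):
        if tag == 'title':
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == 'title':
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Sample HTML standing in for a scraped page response
html = '<html><head><title>Example Company | LinkedIn</title></head><body></body></html>'
parser = TitleExtractor()
parser.feed(html)
print(parser.title)
```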

Setting Up the Bot on BotFleet

Head over to BotFleet, go to the bots section, and create a new bot. Name it something memorable, like "LinkedIn Scraper".


Copy the Python script we just wrote into the script section.


We're using the requests library. Let's add this to our requirements:

requests

Set SCRAPINGBEE_API_KEY as an environment variable:


Schedule It

Set your bot to run every hour, using the following cron expression:

0 * * * *
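The five fields here are minute, hour, day of month, month, and day of week, so 0 * * * * fires at minute zero of every hour. If you want to sanity-check what that means, here's a small illustrative helper (next_hourly_run is my own name, not a BotFleet or cron API) that computes the next firing time for this particular expression:

```python
from datetime import datetime, timedelta

def next_hourly_run(now):
    """Next time matching the cron expression 0 * * * * (top of each hour)."""
    return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)

print(next_hourly_run(datetime(2024, 1, 1, 10, 30)))  # 2024-01-01 11:00:00
```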

Following rules

Even with proxy rotation, it's crucial to respect rate limits to avoid overloading LinkedIn's servers. Ensure your scraping activities comply with LinkedIn's Terms of Service and are ethically sound.
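One simple way to respect rate limits is to sleep a randomized interval between requests. Here's a sketch (polite_get and its delay bounds are illustrative choices, not prescribed values; the fetch parameter is injectable so the demo runs without touching the network):

```python
import random
import time

def polite_get(urls, min_delay=5.0, max_delay=15.0, fetch=None):
    """Fetch URLs one at a time, sleeping a random interval between
    requests so we never hammer the target's servers."""
    results = []
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(random.uniform(min_delay, max_delay))
        results.append(fetch(url))
    return results

# Demo with a stub fetcher (no network, zero delay)
demo = polite_get(['a', 'b'], min_delay=0, max_delay=0, fetch=lambda u: u.upper())
print(demo)  # ['A', 'B']
```

In the real script you'd pass a fetcher that calls requests.get with your ScrapingBee params, and keep the default delays.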

Wrapping Up

Congratulations! You've just stepped up your web scraping game with ScrapingBee's proxy rotation capabilities, opening up new possibilities for LinkedIn data collection. Whether you're gathering market research, tracking job trends, or analyzing professional networks, doing so smartly and responsibly is key.

Got questions, tips, or success stories with your LinkedIn scraping projects? Reach out.