## Practical Implementations

### Real-World Web Search Scenarios

#### 1. Academic Research Crawler
The crawler below queries Google Scholar's result page and collects titles and snippets into a DataFrame. Keep in mind that Scholar blocks heavy automated traffic, so this pattern suits only light, occasional use:

```python
import requests
from bs4 import BeautifulSoup
import pandas as pd

def academic_search(keywords, num_results=10):
    """Scrape Google Scholar result titles and snippets into a DataFrame."""
    base_url = "https://scholar.google.com/scholar"
    params = {"q": keywords, "hl": "en"}
    results = []

    response = requests.get(base_url, params=params)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Each result lives in a div with class 'gs_ri'; some entries lack
    # a snippet, so guard against missing elements before reading .text.
    for result in soup.find_all("div", class_="gs_ri")[:num_results]:
        title = result.find("h3", class_="gs_rt")
        abstract = result.find("div", class_="gs_rs")
        results.append({
            "title": title.get_text(strip=True) if title else "",
            "abstract": abstract.get_text(strip=True) if abstract else "",
        })
    return pd.DataFrame(results)
```
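As a quick smoke test, you might call the crawler like this; an empty DataFrame usually means Scholar blocked the request rather than that no results exist:

```python
# Hypothetical usage: fetch the top five results for a topic
df = academic_search("transformer neural networks", num_results=5)
print(df.head())
```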
### Search Implementation Strategies
One common strategy is to fan a single query out to several marketplaces and compare the extracted prices. The CSS class used below is a simplified placeholder; real Amazon and eBay markup differs and changes frequently:

```python
from urllib.parse import quote_plus

def compare_product_prices(product_name):
    """Query several marketplaces and collect the first few listed prices."""
    query = quote_plus(product_name)  # URL-encode spaces and symbols
    search_engines = {
        'Amazon': f"https://www.amazon.com/s?k={query}",
        'eBay': f"https://www.ebay.com/sch/i.html?_nkw={query}"
    }
    price_comparisons = {}
    for platform, url in search_engines.items():
        response = requests.get(url)
        soup = BeautifulSoup(response.text, 'html.parser')
        # NOTE: 'price' is an illustrative selector, not the real markup
        prices = soup.find_all('span', class_='price')
        price_comparisons[platform] = [
            float(p.text.replace('$', '').replace(',', ''))
            for p in prices[:5]
        ]
    return price_comparisons
```
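A sketch of how the comparison might be consumed, assuming the placeholder selectors matched anything:

```python
# Hypothetical usage: print the cheapest price found per platform
comparisons = compare_product_prices("mechanical keyboard")
for platform, prices in comparisons.items():
    if prices:
        print(f"{platform}: lowest price ${min(prices):.2f}")
```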
### Search Workflow Visualization

```mermaid
graph TD
    A[Search Query] --> B[Select Sources]
    B --> C[Send Requests]
    C --> D[Parse Results]
    D --> E[Extract Data]
    E --> F[Analyze/Process]
    F --> G[Present Findings]
```
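The diagram maps naturally onto a small pipeline. The sketch below is one illustrative way to wire the stages together; the function name and the paragraph-based extraction are assumptions for demonstration, not a fixed API:

```python
import requests
from bs4 import BeautifulSoup

def search_pipeline(query, source_urls):
    """Minimal sketch of the workflow above: request, parse, extract, filter."""
    findings = []
    for url in source_urls:                                  # Select Sources
        response = requests.get(url)                         # Send Requests
        soup = BeautifulSoup(response.text, "html.parser")   # Parse Results
        paragraphs = [p.get_text(strip=True)
                      for p in soup.find_all("p")]           # Extract Data
        findings += [p for p in paragraphs
                     if query.lower() in p.lower()]          # Analyze/Process
    return findings                                          # Present Findings
```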
### Advanced Search Techniques
Aggregation runs the same query against multiple sources and merges the results. As before, the 'result' class is a generic placeholder; each site uses its own markup, so adapt the selector per source:

```python
from urllib.parse import quote_plus

def aggregate_search_results(query):
    """Collect result snippets for the same query from multiple sources."""
    encoded = quote_plus(query)
    sources = [
        {'name': 'Wikipedia',
         'url': f"https://en.wikipedia.org/w/index.php?search={encoded}"},
        {'name': 'News',
         'url': f"https://news.google.com/search?q={encoded}"}
    ]
    aggregated_results = {}
    for source in sources:
        response = requests.get(source['url'])
        soup = BeautifulSoup(response.text, 'html.parser')
        # NOTE: 'result' is an illustrative selector; adjust per site
        results = soup.find_all('div', class_='result')
        aggregated_results[source['name']] = [
            result.get_text(strip=True) for result in results[:3]
        ]
    return aggregated_results
```
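A possible way to inspect the merged output:

```python
# Hypothetical usage: show up to three snippets per source
for name, snippets in aggregate_search_results("quantum computing").items():
    print(f"--- {name} ---")
    for snippet in snippets:
        print(snippet[:120])
```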
### Search Implementation Comparison

| Technique                | Complexity | Use Case               | Performance |
| ------------------------ | ---------- | ---------------------- | ----------- |
| Basic Requests           | Low        | Simple searches        | Fast        |
| BeautifulSoup Parsing    | Medium     | Structured data        | Moderate    |
| Multi-Source Aggregation | High       | Comprehensive research | Slower      |
### Error Handling and Robustness
```python
import time
import requests

def robust_search(query, max_retries=3):
    """Retry a search on transient network errors before giving up."""
    for attempt in range(max_retries):
        try:
            # perform_search is whichever search function you are wrapping,
            # e.g. academic_search or aggregate_search_results
            return perform_search(query)
        except requests.RequestException as e:
            print(f"Search attempt {attempt + 1} failed: {e}")
            time.sleep(2)  # Wait before retrying
    return None
```
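A common refinement, sketched below, is exponential backoff: each failed attempt doubles the wait, which is gentler on a struggling server. Here `perform_search` remains the same placeholder as above:

```python
import time
import requests

def robust_search_backoff(query, max_retries=3, base_delay=1.0):
    """Variant of robust_search that doubles the delay after each failure."""
    for attempt in range(max_retries):
        try:
            return perform_search(query)  # placeholder search function, as above
        except requests.RequestException as e:
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
            print(f"Attempt {attempt + 1} failed: {e}; retrying in {delay:.0f}s")
            time.sleep(delay)
    return None
```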
### Best Practices for LabEx Developers

- Implement comprehensive error handling
- Use rate limiting (see the sketch after this list)
- Cache search results (also covered in the sketch below)
- Respect website terms of service
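Rate limiting and caching can be combined in one small wrapper using only the standard library. The class below is an illustrative sketch; its name and defaults are assumptions, not a LabEx API:

```python
import time

class ThrottledSearchCache:
    """Throttle live requests to one per `interval` seconds and cache by query."""

    def __init__(self, search_fn, interval=2.0):
        self.search_fn = search_fn      # e.g. aggregate_search_results
        self.interval = interval
        self.cache = {}
        self._last_call = 0.0

    def search(self, query):
        if query in self.cache:         # Repeated queries never hit the network
            return self.cache[query]
        wait = self.interval - (time.monotonic() - self._last_call)
        if wait > 0:                    # Space out consecutive live requests
            time.sleep(wait)
        self._last_call = time.monotonic()
        self.cache[query] = self.search_fn(query)
        return self.cache[query]
```

For example, `ThrottledSearchCache(aggregate_search_results).search("web scraping")` performs one network round trip; calling it again with the same query is served from memory.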
By mastering these practical implementations, developers can create sophisticated web search solutions that extract valuable information efficiently and ethically.