Friday, 29 May 2015

Data Scraping Services - Web Scraping Video Tutorial Collection for All Programming Languages

Web scraping is a mechanism in which a request is made to a website URL to retrieve the HTML document, and that text is then parsed to extract data from the HTML. Scraping a website for data is a general approach and can be implemented in any programming language, such as PHP, Java, C#, Python and many others.
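
To make the request-then-parse idea concrete, here is a minimal sketch in Python (assuming the third-party requests and BeautifulSoup libraries are installed; the URL and CSS selector are placeholders, not a real target):

import requests
from bs4 import BeautifulSoup

# Request the page, then parse the returned HTML and extract text from it.
response = requests.get("http://example.com/products", timeout=30)   # placeholder URL
response.raise_for_status()                                          # stop on a failed request

soup = BeautifulSoup(response.text, "html.parser")
for item in soup.select(".product-title"):                           # placeholder selector
    print(item.get_text(strip=True))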

There are many web scraping tools available on the market that let you extract data with no coding knowledge. In many cases, however, such tools fall short because the data requires a custom crawling flow, and then you have to build your own web scraping application in a programming language you know. In this post I have collected scraping video tutorials for several programming languages.

I am mostly familiar with web scraping using PHP, C# and a few scraping tools, and I provide web scraping services. If you have a scraping requirement, send me your requirements and I will get back to you with sample scraped data and my best price.

Web Scraping Using PHP

You can do web scraping in PHP using the cURL library and the Simple HTML DOM parsing library. The PHP function file_get_contents() can also be useful for making web requests. One drawback of scraping with plain PHP is that it can't execute JavaScript, so AJAX-based pages can't be scraped this way.

Web Scraping Using C#

There are many libraries available in .NET for HTML parsing and data scraping. I have used the WebBrowser control and the HTML Agility Pack for data extraction in .NET using C#.

I haven't done web scraping in Java, Perl or Python. I have learned web scraping in Node.js using the CasperJS and PhantomJS libraries. Still, I think the tutorials below will be helpful for someone working in Java or Python.

Web Scraping Using Jsoup in Java

Scraping Stock Data Using Python

Develop Web Crawler Using PERL

Web Scraping Using Node.Js

If you find any other good web scraping video tutorials, please share the links in the comments so other readers can benefit from them.

Source: http://webdata-scraping.com/web-scraping-video-tutorial-collection-programming-language/

Tuesday, 26 May 2015

Web Scraping Services - Extracting Business Data You Need

Would you like to have someone collect, extract, find or scrape contact details, statistics, lists, data, or other information from websites, online stores, directories, and more?

"Hi-Tech BPO Services offers 100% risk-free, quick, accurate and affordable web scraping, data scraping, screen scraping, data collection, data extraction, and website scraping services to worldwide organizations ranging from medium-sized business firms to Fortune 500 companies."

At Hi-Tech BPO Services we help global businesses build their own databases and mailing lists, generate leads, and get access to the vast resources of unstructured data available on the World Wide Web.

We scrape data from various sources such as websites, blogs, podcasts, and online directories, and convert it into structured formats such as Excel, CSV, Access, text, and MySQL using automated and manual scraping technologies. Through our web data scraping services, we crawl websites and gather sales leads, competitors' product details, new offers, pricing methodologies, and various other types of information from the web.

Our web scraping services scrape data such as name, email, phone number, address, country, state, city, product, and pricing details among others.

Areas of Expertise in Web Scraping:

•    Contact Details
•    Statistics data from websites
•    Classifieds
•    Real estate portals
•    Social networking sites
•    Government portals
•    Entertainment sites
•    Auction portals
•    Business directories
•    Job portals
•    Email IDs and profiles
•    URLs in an Excel spreadsheet
•    Marketplace portals
•    Search engines and SEO
•    Accessories portals
•    News portals
•    Online shopping portals
•    Hotels and restaurants
•    Event portals
•    Lead generation

Industries we Serve:

Our web scraping services are suitable for industries including real estate, information technology, universities, hospitals, medicine, property, restaurants, hotels, banking, finance, insurance, media/entertainment, automobiles, marketing, human resources, manufacturing, healthcare, academics, travel, telecommunications and many more.

Why Hi-Tech BPO Services for Web Scraping?

•    Skilled and committed scraping experts
•    Accurate solutions
•    Highly cost-effective pricing strategies
•    Presence of satisfied clients worldwide
•    Use of the latest and most effective web scraping technologies
•    Timely delivery
•    Round the clock customer support and technical assistance

Get Quick Cost and Time Estimate

Source: http://www.hitechbposervices.com/web-scraping.php

Monday, 25 May 2015

Which language is the most flexible for scraping websites?


I'm new to programming. I know a little Python and a little Objective-C, and I've been going through tutorials for each. Then it occurred to me that I need to know which language is more flexible (Python, Objective-C, something else) for screen scraping a website for content.

What do I mean by "flexible"?

Well, ideally, I need something that will be easy to refactor and tweak for similar projects. I'm trying to avoid doing a lot of re-writing (well, re-coding) if I wanted to switch some of the variables in the program (e.g., the website to be scraped, the content to fetch, etc.).

Anyways, if you could please give me your opinion, that would be great. Oh, and if you know any existing frameworks for the language you recommend, please share. (I know a little about Selenium and BeautifulSoup for python already).

4 Answers

I recently wrote a relatively complex web scraper to harvest a TON of data. It had to do some relatively complex parsing, I needed to stuff the results into a database, etc. I'm a C# programmer now and formerly a Perl guy.

I wrote my original scraper using Python. I started on a Thursday and by Sunday morning I was harvesting about a million scores from a show horse site. I used Python and SQLite because they were fast.

HOWEVER, as I started putting together programs to regularly keep the data updated and to populate the SQL Server that would backend my MVC3 application, I kept hitting snags and gaps in my Python knowledge.

In the end, I completely rewrote the scraper/parser in C# using the HtmlAgilityPack and it works better than before (and just about as fast).

Because I KNEW THE LANGUAGE and the environment so much better I was able to add better database support, better logging, better error handling, etc. etc.

So... short answer.. Python was the fastest to market with a "good enough for now" solution, but the language I know best (C#) was the best long-term solution.

EDIT: I used BeautifulSoup for my original crawler written in Python.

Answer (5 votes)

The most flexible is the one that you're most familiar with.

Personally, I use Python for almost all of my utilities. For scraping, I find that its functionality specific to parsing and string manipulation requires little code, is fast, and there are a ton of examples out there (strong community). Chances are that someone's already written whatever you're trying to do, or there's at least something along the same lines that needs very little refactoring.
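
As an illustration of how little code a typical Python scraping task needs, here is a hedged sketch (assuming requests and BeautifulSoup are installed; the URL is a placeholder) that collects every link on a page:

import requests
from bs4 import BeautifulSoup

# Fetch a page and list each link's text and destination in a few lines.
html = requests.get("http://example.com", timeout=30).text           # placeholder URL
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a", href=True):
    print(a.get_text(strip=True), "->", a["href"])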

Answer (1 vote)

I think it's safe to say that Python is a better place to start than Objective-C. Honestly, just about any language meets the "flexible" requirement. All you need is well thought out configuration parameters. Also, a dynamic language like Python can go a long way in increasing flexibility, provided that you account for runtime type errors.
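
To illustrate the point about configuration parameters, here is a hedged Python sketch of a scraper driven entirely by a config dictionary, so switching targets means editing data rather than code (the sites, selectors and field names are invented examples, and requests plus BeautifulSoup are assumed):

import requests
from bs4 import BeautifulSoup

# Invented example configs: each entry names a target site and the selectors to use.
CONFIGS = {
    "books":  {"url": "http://example.com/books",  "item": ".book",  "title": "h2"},
    "movies": {"url": "http://example.org/movies", "item": ".movie", "title": ".name"},
}

def scrape(config):
    """Yield one title per item, using only values taken from the config."""
    page = requests.get(config["url"], timeout=30).text
    soup = BeautifulSoup(page, "html.parser")
    for node in soup.select(config["item"]):
        title = node.select_one(config["title"])
        if title is not None:
            yield title.get_text(strip=True)

for name, cfg in CONFIGS.items():
    print(name, list(scrape(cfg))[:5])   # first few titles from each configured site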

Answer (1 vote)

I recently wrote a very simple web-scraper; I chose Common Lisp as I'm learning the language.

On the basis of my experience - both of the language and the availability of help from experienced Lispers - I recommend investigating Common Lisp for your purpose.

There are excellent XML-parsing libraries available for CL, as well as libraries for parsing invalid HTML, which you'll need unless the sites you're parsing consist solely of valid XHTML.

Also, Common Lisp is a good language in which to implement DSLs; a DSL for web-scraping may be a solution to your requirement for flexibility & re-use.

Source: http://programmers.stackexchange.com/questions/74998/which-language-is-the-most-flexible-for-scraping-websites/75006#75006


Friday, 22 May 2015

How to prevent getting blacklisted while scraping

Crawlers can retrieve data much more quickly and in greater depth than human searchers, so bad scraping practices can have some impact on the performance of the site.

Needless to say, if a single crawler is performing multiple requests per second and/or downloading large files, an underpowered server would have a hard time keeping up with requests from multiple crawlers.

Since spiders don't bring direct organic traffic and can noticeably affect the performance of the site, most site admins dislike spiders and do their best to block them.

Let's go through how websites detect and block spiders, and look at the techniques for overcoming those barriers.

Most websites don't have anti-scraping mechanisms, since these could affect the user experience, but some sites do not believe in open data access.

Before going through this article always keep in mind that

    A GOOD SPIDER MUST OBEY A WEBSITE’S CRAWLING POLICIES.

HOW DOES DETECTING ‘SPIDER ACTIVITY’ WORK?

A web server can use different mechanisms to tell a spider apart from a normal user. Here are some methods used by sites to detect a spider:

•    Unusual traffic or a high download rate, especially from a single client or IP address within a short time span, raises a bot alert.

•    Repetitive tasks performed on the website, based on the assumption that a human user won't perform the same repetitive tasks all the time.

•    Honeypot traps inside the pages; these honeypots are usually links which aren't visible to a normal user, only to a spider. When a scraper/spider tries to access such a link, the alarm is tripped.

Spend some time investigating the anti-scraping mechanisms used by a site and build your spider accordingly; it will provide a better outcome in the long run and increase the longevity and robustness of your work.

EASIEST WAY TO FIND IF A SITE HATES BOTS

Check the robots.txt file: if it contains lines like the ones below, the site doesn't like bots. However, since most sites want to be indexed by Google (arguably the largest scraper of websites globally ;-)), they do allow access to bots and spiders.

User-agent: *
Disallow: /

These lines block well-behaved bots, i.e. the bots that respect robots.txt.
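
A rough way to automate this check in Python (using requests; the domain is a placeholder, and a proper parser is shown under best practice 6 below):

import requests

# Fetch robots.txt and look for a blanket "Disallow: /" under "User-agent: *".
text = requests.get("http://example.com/robots.txt", timeout=30).text   # placeholder domain

current_agent = None
for line in text.splitlines():
    line = line.split("#", 1)[0].strip()              # drop comments and whitespace
    if line.lower().startswith("user-agent:"):
        current_agent = line.split(":", 1)[1].strip()
    elif line.lower().startswith("disallow:") and current_agent == "*":
        if line.split(":", 1)[1].strip() == "/":
            print("This site disallows all well-behaved bots entirely.")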

Another sign is the irritating presence of CAPTCHAs on pages other than the authentication page.

WHAT HAPPENS WHEN YOU GET BANNED

There are two ways to ban a web spider: either by banning all accesses from a particular IP, or by banning all accesses that use a specific identifier to access the server (most browsers and web spiders identify themselves whenever they request a page via the User-Agent header). The Chrome browser, for example, uses: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.149 Safari/537.36

The banning can be temporary or permanent. Temporary blocks can last minutes or hours.

HOW DO WE KNOW A SITE HAS BLOCKED US?

If any of the following symptoms appear on the site that you are crawling, it is a sign of being blocked or banned.

•    Showing CAPTCHA pages
•    Unusual content delivery delay
•    Frequent responses with 404, 301 or 500 errors

Frequent appearance of the following status codes is also an indication of blocking (a small detection sketch follows the list below):

•    401 Unauthorized
•    403 Forbidden
•    404 Not Found
•    408 Request Timeout
•    429 Too Many Requests
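
Here is the detection sketch mentioned above, in Python. The status codes come from the lists in this section; the CAPTCHA keyword test and the idea of counting failures over a batch are rough heuristics of my own, not a definitive method:

import requests

# Status codes taken from the lists above; frequent occurrences suggest a block.
BLOCK_STATUS_CODES = {301, 401, 403, 404, 408, 429, 500}

def looks_blocked(response):
    """Rough heuristic: block-ish status code, or a CAPTCHA page served as content."""
    return (response.status_code in BLOCK_STATUS_CODES
            or "captcha" in response.text.lower())

urls = [f"http://example.com/page/{i}" for i in range(1, 21)]    # placeholder URLs
suspicious = sum(looks_blocked(requests.get(u, timeout=30)) for u in urls)
if suspicious > len(urls) // 2:                                  # arbitrary threshold
    print("More than half the responses look like blocks - probably banned.")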

WEB CRAWLING BEST PRACTICES

These are the best practices we can follow to avoid detection.

1. MAKE CRAWLING SLOWER, DO NOT DDoS THE SERVER, TREAT IT NICELY

Use auto-throttling mechanisms, which automatically adjust crawling speed based on the load on both the spider and the website you are crawling, and tune the spider to its optimum crawling speed. The faster you crawl, the worse it is for everyone.

Put some random sleeps between requests, add longer delays after crawling a number of pages, and choose the lowest number of concurrent requests possible. These techniques make the spider look more like a human being.
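
A minimal sketch of these delay techniques in Python using requests; the URLs, delay ranges and batch size are arbitrary illustrations, not tuned recommendations:

import random
import time
import requests

urls = [f"http://example.com/page/{i}" for i in range(1, 51)]   # placeholder URLs

for count, url in enumerate(urls, start=1):
    requests.get(url, timeout=30)
    time.sleep(random.uniform(2, 6))           # random sleep between requests
    if count % 10 == 0:
        time.sleep(random.uniform(30, 60))     # longer pause after every 10 pages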

2. DISGUISE YOUR REQUESTS BY ROTATING IP/PROXY

A server can easily detect a bot by checking the volume of requests from a single IP address, so we use different IPs for making requests to a server and the detection rate goes down. Make a pool of IPs that you can use, and use a random one for each request.

Several methods can be used to change the IP: VPNs, shared proxies and Tor can help, and some third parties also provide IP rotation services.
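
Here is a hedged sketch of the rotation idea with the requests library; the proxy addresses are placeholders that you would replace with your own pool or a rotation service:

import random
import requests

PROXY_POOL = [
    "http://10.0.0.1:8080",    # placeholder proxies - substitute your own
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

def fetch(url):
    """Route each request through a randomly chosen proxy from the pool."""
    proxy = random.choice(PROXY_POOL)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)

response = fetch("http://example.com")          # placeholder target
print(response.status_code)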

3. USER-AGENT SPOOFING

Since every request made from a client contains a User-Agent header, using the same user agent repeatedly leads to the detection of a bot. User agent spoofing is the best solution for this: make a list of user agents and pick a random one for each request.

Websites do not want to block genuine users so you should try to look like one. Set your user-agent to a common web browser instead of using the library default (such as wget/version or urllib/version). You could even pretend to be the Google Bot: Googlebot/2.1; (http://www.google.com/bot.html)
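
A small Python sketch of that rotation; the first string is the Chrome example above, the second is just another sample browser string, and in practice you would build a longer list from the resources linked below:

import random
import requests

USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/33.0.1750.149 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0",
]

# Send a random, browser-like User-Agent with every request.
headers = {"User-Agent": random.choice(USER_AGENTS)}
response = requests.get("http://example.com", headers=headers, timeout=30)   # placeholder URL
print(response.request.headers["User-Agent"])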

You can check your user-agent string here:

http://www.whatsmyuseragent.com/

A good user-agent string list can be found here:

http://www.useragentstring.com/pages/useragentstring.php

4. BE AWARE OF HONEYPOTS

Some site designers put honeypot traps inside websites to detect web spiders; these may be links that a normal user can't see but a spider can.

When following links, always check that the link is actually visible and has no nofollow attribute. Some honeypot links used to detect spiders will have the CSS style display:none or will be colour-disguised to blend in with the page's background.
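
A hedged sketch of filtering out such links in Python with BeautifulSoup; note that it only catches rel="nofollow" and inline display:none styles, so honeypots hidden via external CSS or background-matching colours would need a real rendering engine to detect:

import requests
from bs4 import BeautifulSoup

html = requests.get("http://example.com", timeout=30).text     # placeholder URL
soup = BeautifulSoup(html, "html.parser")

safe_links = []
for a in soup.find_all("a", href=True):
    rel = a.get("rel") or []                                    # rel is a list in BeautifulSoup
    style = (a.get("style") or "").replace(" ", "").lower()
    if "nofollow" in rel or "display:none" in style:
        continue                                                # skip likely honeypot / off-limits links
    safe_links.append(a["href"])

print(safe_links[:10])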

5. DO NOT ALWAYS FOLLOW THE SAME CRAWLING PATTERN

Only robots follow exactly the same crawling pattern; sites that have intelligent anti-crawling mechanisms can easily detect spiders by finding patterns in their actions. Humans won't perform the same repetitive tasks over and over. Incorporate some random clicks on the page, mouse movements and random actions to make the spider look like a human client.

6. ALWAYS RESPECT THE robots.txt

All web spiders are supposed to follow the rules placed in a website's robots.txt file, such as how frequently they are allowed to request pages and which directories they are allowed to crawl. They should also supply a consistent, valid User-Agent string that identifies the requests as coming from a bot.
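
A sketch of honouring those rules with Python's standard-library robots.txt parser; the bot name and URLs are placeholders, and crawl_delay() returns None when the site doesn't specify a delay:

import time
import requests
from urllib import robotparser

USER_AGENT = "MyExampleBot/1.0"                   # hypothetical, consistent bot identity

rp = robotparser.RobotFileParser("http://example.com/robots.txt")   # placeholder domain
rp.read()

url = "http://example.com/some/page"
if rp.can_fetch(USER_AGENT, url):
    time.sleep(rp.crawl_delay(USER_AGENT) or 1)   # fall back to 1 second if unspecified
    requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
else:
    print("robots.txt disallows this URL for", USER_AGENT)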

Source: http://learn.scrapehero.com/how-to-prevent-getting-blacklisted-while-scraping/

Wednesday, 20 May 2015

Hard-Scraped Hardwood Flooring: Restoration of History

Throughout history, hardwood flooring has undergone dramatic changes, from the meticulously hand-scraped, polished hardwood floors of majestic plantations of the Deep South to modern-day technology providing maintenance-free wood flooring designed for comfort and appearance. The hand-scraped hardwood floors of the South conveyed the charm, rustic nature and character often associated with that era. Today, hand-scraped hardwood flooring is being revitalized and used in upscale homes and places of business to restore the old country charm that had once faded into oblivion.

As the name implies, hand-scraped flooring involves retexturing the top layer of the flooring material by various methods in an attempt to mimic the rustic appearance of flooring of yesteryear. Depending on the degree of texture required, hand-scraping hardwood material is often accomplished by highly skilled craftsmen with specialized tools and years of experience perfecting the procedure. When properly done, hand-scraped hardwood floors add texture, richness and uniqueness not offered by any similar hardwood flooring product.

Rooted in history, these floors are available with finished or unfinished surfaces. The majority of individuals selecting hand-scraped hardwood flooring choose a prefinished floor to reduce per-square-foot installation and finishing labor charges, allowing budget guidelines to bend rather than break. As expected, hand-scraped flooring is expensive and, depending on the grade and finish selected, can range from $15 to $40 per square foot and beyond for material alone. Preparation of the material is labor intensive, adding dramatically to the overall cost per square foot. The recommended professional installation can, and often does, increase the cost per square foot as well, placing this type of hardwood flooring well out of reach of the average purchaser.

With numerous selections of hand-scraped finishes available, each finish is designed to bring out a different appearance making it a one-of-a-kind work of art. These numerous finish selections include:

• Time worn aged, dark coloring stain application bringing out grain characteristics

• Wire brushed, providing a highlighted "grainy" effect with obvious rough texture

• Hand sculpted, smoother distressed uniform appearance

• French Bleed, staining of edges and side joints with a much darker stain to give a bleeding effect to the wood

• Hand Hewn or Rough Sawn, with visible and noticeable saw marks

Regardless of the selection made, scraped flooring cannot be compared to any other available flooring material based on durability, strength and visual appearance. Limited by only the imagination and creativity, several wood species can be used to create unusual floor patterns, highlighting main focal points of personal libraries and art collections.

The precise process utilized in the creation of scraped floors projects a custom look with deep color and subtle warm highlights. With radiant natural light reflecting off this type of floor, the effect of beauty and depth is radiated in a fashion that fills the room with solitude and serenity, encompassing all who enter. Hand-scraped hardwood floors speak of the past: a time of dissent, a time of war and ambiguity towards other races, and the bloodshed so that all men could be treated as equals. More than exquisite flooring, hand-scraped hardwood flooring is the restoration of history.

Source: http://ezinearticles.com/?Hard-Scraped-Hardwood-Flooring:-Restoration-of-History&id=6333218

Sunday, 17 May 2015

Introducing ScrapeShield: Discover, Defend & Deter Content Scraping

If you're a publisher, whether an individual blogger or major media outlet, you've undoubtedly experienced content scraping. You search the web for an article you've published or other original content you've created, and you find it copied and republished on some other random website. Often the site will be full of ads. And, sometimes, it will even rank higher in search results than your original work.

While you may envision an army of individuals copying and pasting your content on their sites, the truth is content scraping is typically an automated process with bots that grab original content and then republish it without human intervention onto link farm sites. CloudFlare has blocked many of these bots automatically in the past, but we decided it was time to do something to more actively stop them.

Introducing ScrapeShield

ScrapeShield is an app created by the CloudFlare team. It incorporates several existing CloudFlare features like email obfuscation and hotlink protection that serve to protect from content scraping and adds a number of new features as well. Because we believe every publisher of original content should be able to understand and control how their work is used, we're providing ScrapeShield free for every CloudFlare user.

Detect, Defend & Deter

ScrapeShield has different elements to help you detect when your content is scraped, defend your site against content scrapers, and even deter content scrapers from targeting you in the first place. If you enable ScrapeShield, CloudFlare will automatically insert invisible tracking beacons in your content. When automated bots scrape your content, they pull the beacons along with them. CloudFlare detects these beacons when they ping from sites that aren't your own. You can access your ScrapeShield control panel to see where your content is being republished. Not only is this useful in showing scraping, but you can also see users who are reading your content through proxy services like Flipboard or Pulse.

The data from the content beacons is fed back into CloudFlare's protection system. As CloudFlare identifies content scraping bots, we automatically prevent them from accessing your site. Just like Project Honey Pot, the original inspiration for CloudFlare, used traps to detect when spammers were harvesting email addresses, CloudFlare now uses data from ScrapeShield to identify content scrapers and keep them off publishers' sites.

Maze

We didn't want to just stop scrapers from attacking sites on CloudFlare, we also wanted to tie up their resources so they couldn't harm the rest of the web. To do this, we created Maze. Maze routes known content scrapers who are visiting ScrapeShield-protected sites into a virtual labyrinth of gibberish and gobbledygook. We dynamically throttle the bandwidth and speed so that instead of the pages loading as fast as possible, the connection is held open to the scrapers and their resources are tied up.

We use excess resources on the CloudFlare network to generate Maze, and it doesn't consume any of our publishers' resources or add any additional load to their sites. What's beautiful about the system is that the only way that content scrapers can be sure they're avoiding Maze is to avoid CloudFlare's IP addresses entirely. For any content scrapers who may be reading this, here's a helpful list of all of our IPs so you can make sure to stay away.

No Pinning

Finally, with the rise of sites like Pinterest, innocent content scraping may become even more prolific. While many sites welcome their images being pinned, we wanted to make it easy to opt out. ScrapeShield includes an option to add the no-pinning meta tag to your site to prevent your images from being pinned. As other similar services add opt-out mechanisms, expect that we'll include an easy way for you to use them right from the ScrapeShield interface.

The health of the web depends on publishers who create original content getting credit for their creations. CloudFlare is committed to building a better web, and we're extremely excited about ScrapeShield as a new tool to help publishers do exactly that.

Source: https://blog.cloudflare.com/introducing-scrapeshield-discover-defend-dete/

Wednesday, 6 May 2015

Kimono Is A Smarter Web Scraper That Lets You “API-ify” The Web, No Code Required

A new Y Combinator-backed startup called Kimono wants to make it easier to access data from the unstructured web with a point-and-click tool that can extract information from webpages that don't have an API available. And for non-developers, Kimono plans to eventually allow anyone to track data without needing to understand APIs at all.

This sort of smarter "web scraper" idea has been tried before, and has always struggled to find more than a niche audience. Previous attempts with similar services like Dapper or Needlebase, for example, folded. Yahoo Pipes still chugs along, but it's fair to say that the service has long since stopped being a priority for its parent company.

But Kimono’s founders believe that the issue at hand is largely timing.

“Companies more and more are realizing there’s a lot of value in opening up some of their data sets via APIs to allow developers to build these ecosystems of interesting apps and visualizations that people will share and drive up awareness of the company,” says Kimono co-founder Pratap Ranade. (He also delves into this subject deeper in a Forbes piece here). But often, companies don’t know how to begin in terms of what data to open up, or how. Kimono could inform them.

Plus, adds Ranade, Kimono is materially different from earlier efforts like Dapper or Needlebase, because it’s outputting to APIs and is starting off by focusing on the developer user base, with an expansion to non-technical users planned for the future. (Meanwhile, older competitors were often the other way around).

The company itself is only a month old, and was built by former Columbia grad school companions Ranade and Ryan Rowe. Both left grad school to work elsewhere, with Rowe off to Frog Design and Ranade at McKinsey. But over the nearly half-dozen years they pursued their career paths separately, the two stayed in touch and worked on various small projects together.

One of those was Airpapa.com, a website that told you which movies were showing on your flights. This, as it turned out, gave them the idea for Kimono. To get the data they needed for the site, they had to scrape it from several publicly available websites.

“The whole process of cleaning that [data] up, extracting it on a schedule…it was kind of a painful process,” explains Rowe. “We spent most of our time doing that, and very little time building the website itself,” he says. At the same time, while Rowe was at Frog, he realized that the company had a lot of non-technical designers who needed access to data to make interesting design decisions, but who weren’t equipped to go out and get the data for themselves.

With Kimono, the end goal is to simplify data extraction so that anyone can manage it. After signing up, you install a bookmarklet in your browser, which, when clicked, puts the website into a special state that allows you to point to the items you want to track. For example, if you were trying to track movie times, you might click on the movie titles and showtimes. Then Kimono’s learning algorithm will build a data model involving the items you’ve selected.

That data can be tracked in real time and extracted in a variety of ways, including to Excel as a .CSV file, to RSS in the form of email alerts, or for developers as a RESTful API that returns JSON. Kimono also offers “Kimonoblocks,” which lets you drop the data as an embed on a webpage, and it offers a simple mobile app builder, which lets you turn the data into a mobile web application.
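
To show the shape of the developer-facing option, here is a hedged Python sketch that pulls rows from a JSON-returning REST endpoint and saves them as CSV. The endpoint URL, query parameter and field names are invented for illustration; this is not Kimono's actual API:

import csv
import requests

# Hypothetical endpoint and auth parameter - not Kimono's real API.
data = requests.get(
    "https://api.example.com/v1/movie-times",
    params={"apikey": "YOUR_KEY"},
    timeout=30,
).json()

with open("movie_times.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "showtime"])
    for row in data.get("results", []):          # assumed response shape
        writer.writerow([row.get("title"), row.get("showtime")])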

For developer users, the company is currently working on an API editor, which would allow you to combine multiple APIs into one.

So far, the team says, they’ve been “very pleasantly surprised” by the number of sign-ups, which have reached ten thousand*. And even though only a month old, they’ve seen active users in the thousands.

Initially, they’ve found traction with hardware hackers who have done fun things like making an airhorn blow every time someone funds their Kickstarter campaign, for instance, as well as with those who have used Kimono for visualization purposes, or monitoring the exchange rates of various cryptocurrencies like Bitcoin and dogecoin. Others still are monitoring data that’s later spit back out as a Twitter bot.

Kimono APIs are now making over 100,000 calls every week, and usage is growing by over 50 percent per week. The company also put out an unofficial “Sochi Olympics API” to showcase what the platform can do.

The current business model is freemium based, with pricing that kicks in for higher-frequency usage at scale.

The Mountain View-based company is a team of just the two founders for now, and has initial investment from YC, YC VC and SV Angel.

Source: http://techcrunch.com/2014/02/18/kimono-is-a-smarter-web-scraper-that-lets-you-api-ify-the-web-no-code-required/