Saturday, 19 April 2014

Extracting Movie Information from IMDB

A Python script to access movie information from IMDB.
You can look at the script in my Git repository:
GIT
Goal : To extract Movie Information from IMDB.
Process : 
1) For data scraping, the best Python library is Beautiful Soup.
2) Since I use Python 3, Mechanize is not available, so the work is a little more difficult.
3) The "urllib2 module not found" error is resolved by this code:
try:
    import urllib.request as urllib2
except:
    import urllib2


4) BeautifulSoup extracts information from the HTML code.
5) Search for a particular title on IMDB.
6) Most of the time the first result is the one we are searching for.
7) Use web scraping to extract the movie information.
8) Rating, star cast, critic rating and all the other information is extracted.

rating = soup1.findAll('span',{'itemprop':'ratingValue'})[0].string

extracts the rating from the IMDB page. Here itemprop is an attribute defined in the HTML, and you are extracting the first element whose itemprop value is "ratingValue".
Beautiful Soup attributes useful for scraping:
string : returns the value of a tag.
text : returns the text associated with tags such as h1..h4 and div.
Another useful trick is extracting a link using

mainlink=soup.findAll('td',{'class':'result_text'})[0].a['href']
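Putting the pieces together, a minimal sketch of the whole flow might look like the code below (Python 3). The search URL pattern and the page structure (result_text, itemprop) are assumptions based on how the IMDB pages looked at the time and may change.

# Minimal sketch of the search-then-scrape flow described above (Python 3).
# The IMDB URL pattern and HTML structure are assumptions and may change.
import urllib.request
import urllib.parse
from bs4 import BeautifulSoup

def imdb_rating(title):
    # Search IMDB for the title.
    query = urllib.parse.urlencode({'q': title, 's': 'tt'})
    search_page = urllib.request.urlopen('http://www.imdb.com/find?' + query).read()
    soup = BeautifulSoup(search_page, 'html.parser')

    # Take the first result, which is usually the title we want.
    mainlink = soup.findAll('td', {'class': 'result_text'})[0].a['href']

    # Fetch the movie page and pull out the user rating.
    movie_page = urllib.request.urlopen('http://www.imdb.com' + mainlink).read()
    soup1 = BeautifulSoup(movie_page, 'html.parser')
    return soup1.findAll('span', {'itemprop': 'ratingValue'})[0].string

print(imdb_rating("The Wolf of Wall Street"))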


Wednesday, 26 March 2014

Python and NLP

I recently worked on a project titled "Recommending Similar Defects on Apache Hadoop". It is a recommendation system that predicts similar defects and then predicts the effort estimate for each defect.
Steps:
1) Extract XML/Excel data from the Apache Hadoop Issue Tracker.
https://issues.apache.org/jira/browse/HADOOP
2) Convert the extracted data into CSV for persistent storage.
3) Extract the required columns.


Python Code:

import csv
import re

def col_selector(table, column_key):
    # Return the values of one column from a list of row dictionaries.
    return [row[column_key] for row in table]

with open("Data/next.csv", "r") as csvfile:
    reader = csv.DictReader(csvfile, delimiter=",")
    table = [row for row in reader]
    foo_col = col_selector(table, "Summary")
    bar_col = col_selector(table, "Description")

The above example extracts two columns from the Apache Hadoop Issue Tracker CSV file. Your program must import Python's built-in csv module:
http://docs.python.org/2/library/csv.html

4) From these columns we will generate a set of words specific to Hadoop.
We will apply various NLP techniques to generate these words from the summary and description.

5) There are 5 steps in the natural language processing pipeline:
1. Tokenizing
2. Stemming
3. Stop Word Removal
4. Vector Space Representation
5. Similarity Measures

Step 1: Tokenizing
The tokenization process involves breaking a stream of text characters up into words, phrases, symbols or other meaningful elements called tokens. Before indexing, we filter out all common English stopwords. I obtained a list of around 800 stopwords online:
K. Bouge. Stop Word List.
https://sites.google.com/site/kevinbouge/stopwords-lists
The list contained articles, pronouns, verbs etc. I filtered out all those words from the extracted text. After reviewing the list, we felt a stopwords list for a Hadoop database has to be built separately, as numbers and symbols also have to be filtered out.
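A rough sketch of this step, reusing the foo_col list extracted above; the stop-word file name is a placeholder for wherever the downloaded list was saved:

# Sketch of tokenizing and stop-word filtering.
# "Data/stopwords_en.txt" is a placeholder for the downloaded stop-word list.
import re

with open("Data/stopwords_en.txt") as f:
    stopwords = set(line.strip().lower() for line in f)

def tokenize(text):
    # Keep only alphabetic tokens, which also drops numbers and symbols.
    return re.findall(r"[a-z]+", text.lower())

summary_tokens = [[t for t in tokenize(summary) if t not in stopwords]
                  for summary in foo_col]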


Step 2: Stemming
Stemming is used to try to identify a ground form for each word in the text. Some words that carry the same information can be used in different grammatical ways, depending on how the creator of the report wrote it down. This phase removes affixes and other components from each token that resulted from tokenization, so that only the stem of each word remains. For stemming, we used the PorterStemmer from NLTK. We passed it the stream of extracted words. Words like caller, called and calling, whose stem is call, were filtered so that only the single word call was kept in the final list. I filtered around 1200 words this way.
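For reference, stemming with NLTK's PorterStemmer looks roughly like this:

# Stemming with NLTK's PorterStemmer: inflected forms collapse to one stem.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["calls", "called", "calling"]
stems = set(stemmer.stem(w) for w in words)
print(stems)   # all three reduce to the stem 'call'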


Step 3: Stop Word Removal
Synonyms are also removed and replaced by one common word; I used WordNet from NLTK to perform this.
Second phase: spell checking, where the word list is compared against a list of misspelled words.
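A possible sketch of the synonym step with NLTK's WordNet interface; mapping each word to the first lemma of its first synset is just one simple convention for picking the common word, not necessarily the exact rule used in the project:

# Sketch: collapse synonyms to one common word using WordNet (NLTK 3 API).
# Using the first lemma of the first synset is an arbitrary convention.
from nltk.corpus import wordnet

def canonical(word):
    synsets = wordnet.synsets(word)
    if not synsets:
        return word                       # word unknown to WordNet, keep as-is
    return synsets[0].lemmas()[0].name()  # lemma of the most common sense

print(canonical("error"), canonical("mistake"))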

Step 4: Vector Space Representation
After the first 3 steps I had around 5500 words. These words were used to identify tags. Each defect with its tags was then represented in a vector space model, using the general method provided by scikit-learn.

Step 5: Similarity Measure
Calculated the cosine similarity between each pair of defect vectors.
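A sketch of steps 4 and 5 with scikit-learn; the defect texts here are made-up placeholders:

# Vector space representation (TF-IDF) and cosine similarity with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

defects = ["namenode fails to start after upgrade",        # placeholder texts
           "datanode crash during block report",
           "namenode startup failure after upgrade"]

vectors = TfidfVectorizer().fit_transform(defects)   # one row per defect

similarity = cosine_similarity(vectors)              # pairwise cosine similarity
print(similarity[0])   # how similar defect 0 is to each defect, itself included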
 

Sunday, 2 March 2014

The Curious Case of Leonardo Di Caprio's Oscar: Sentiment Analysis

I was very excited yesterday night for the Oscars, as Leonardo Di Caprio was among the last few Best Actor nominees. Though he has done some brilliant movies in the past and he is a great actor, I was not confident about this movie getting him the award, as I felt he has done much better work in other films. But still, fingers were crossed for a brilliant actor like Leonardo. I was curious to see how Twitter was doing with the Oscars, so I did sentiment analysis on tweets to see what people's point of view on Leonardo was just before the Oscars: how many of them wanted him to win, and how many felt that Leonardo was not the right person for the Oscar and some other actor should win it.
Sentiment Analysis on tweets gave me interesting results.
Steps:
1. Extract tweets with a hashtag on Leonardo
2. Generate a CSV of tweets
3. Extract the required information
4. Natural language processing - tokenizing, stemming etc.
5. Classify them as positive, negative or neutral
6. Apply Naive Bayes (a small classification sketch follows this list).
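Steps 5 and 6 can be sketched with NLTK's NaiveBayesClassifier; the tiny labelled training set below is only a placeholder for the hand-labelled tweets:

# Hedged sketch of Naive Bayes tweet classification with NLTK.
# The labelled examples are placeholders, not the real training data.
import nltk

train = [("leonardo better win an oscar tonight", "positive"),
         ("i am going to scream if he does not win", "positive"),
         ("leonardo does not deserve an oscar", "negative")]

def features(tweet):
    # Simple bag-of-words features.
    return {word: True for word in tweet.lower().split()}

classifier = nltk.NaiveBayesClassifier.train(
    [(features(text), label) for text, label in train])

print(classifier.classify(features("He deserves an oscar tonight")))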

Positive Tweets
RT @FindingSquishy_: If #Leonardo Di Caprio wins an Oscar tonight, Tumblr will probably break
if #Leonardo di Caprio doesn't win an oscar I am going to scream
RT @Mohammed_Meho: #Leonardo Di Caprio better win an Oscar tonight.
RT @Miralemcc: #The Wolf of the Wall Street and# Leonardo di Caprio for #Oscars2014



Negative Tweet
#Leonardo Di Caprio doesn't deserve and never has deserved an oscar. Deal with it

.............................................

Step 1: Scraping tweets for the required tag. This can be done using the Twitter API, or you can use online sites to search for tweets and extract the search results from them. There are many sites that can give you direct sentiment analysis results, like the NCSU project:
http://www.csc.ncsu.edu/faculty/healey/tweet_viz/tweet_app/
Stanford Project : 
Sentiment140
http://www.sentiment140.com/
But I chose TwitterSeeker, which just gives you search results without sentiments, since I wanted to do the sentiment analysis myself.
TwitterSeeker generates an Excel sheet with all the tweet information.

You can filter it by selecting the language as English; in the image I applied no filter.
The generated Excel file will have the user name, time of posting, the tweet and many other optional fields. In the current case I am only concerned with the tweet.

Step 2: Generate a CSV of tweets.
As the input to the ML algorithms, I used a CSV file. CSV is the Comma Separated Values format, in which each column is separated by a delimiter. After getting the Excel file, I converted it into a CSV file.

Step 3: Extract the required information.
This is the step where your knowledge of data mining comes into use. At present I am only concerned with one column, the tweet. A tweet is generally of the form
Username @User #tag Link
which can vary quite randomly.
I removed all the unnecessary parts from it: all usernames, tags and links, as in the sketch below.
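A minimal sketch of that cleaning step; the regular expressions are simple assumptions about what usernames, hashtags and links look like:

# Strip the retweet marker, usernames, hashtags and links from a tweet.
import re

def clean_tweet(tweet):
    tweet = re.sub(r"http\S+", "", tweet)   # links
    tweet = re.sub(r"@\w+", "", tweet)      # @usernames
    tweet = re.sub(r"#\w+", "", tweet)      # #hashtags
    tweet = re.sub(r"\bRT\b", "", tweet)    # retweet marker
    return " ".join(tweet.split())          # collapse the leftover whitespace

print(clean_tweet("RT @Mohammed_Meho: #Leonardo Di Caprio better win an Oscar tonight."))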


#updated every day.






Saturday, 1 March 2014

Keyword Analysis on Apache Hadoop Issue Tracker

For my recent project titled "Recommending Similar Defects and Effort Estimates for the Apache Hadoop Issue Tracker",
I wrote Python code to extract the most used Hadoop-specific keywords in the issue tracker, after removing irrelevant words and stop words from the list.
I am classifying them into various classes like HDFS, Hadoop, Error, MINING, DataNode etc. Some of the words found on the list are posted here.
Click for the word list



The list has approximately 4700 words.
Duplicate words were removed from the list.
The list covers the first 200 defects from Hadoop Common and Hadoop HDFS.
Both the summary and the description of the defects were analyzed, and words were selected based on their usefulness for the defect analysis.


The stop word list was prepared by combining various lists available online, like FoxStoplist.txt and
stopwords-lists
I believe that this list might be useful for someone working on language processing for issue-related words and Hadoop-specific words.
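The counting itself can be sketched in a few lines; the file names below are placeholders for the CSV export and the combined stop-word list:

# Count the most used keywords in the defect summaries and descriptions.
import csv
import re
from collections import Counter

with open("Data/stopwords_combined.txt") as f:       # placeholder file name
    stopwords = set(line.strip().lower() for line in f)

with open("Data/hadoop_defects.csv") as csvfile:     # placeholder file name
    rows = list(csv.DictReader(csvfile))

counts = Counter()
for row in rows:
    text = (row["Summary"] + " " + row["Description"]).lower()
    counts.update(w for w in re.findall(r"[a-z][a-z0-9.]*", text)
                  if w not in stopwords)

print(counts.most_common(20))   # the 20 most used keywords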
Defect Example: HDFS-6001
Description: When HDFS is set up with HA enabled, FileSystem.getUri returns hdfs://<dfs.nameservices>, where dfs.nameservices is defined when HA is enabled.
In documentation: This is probably OK or even intended. But a caller may further process the URI, for example, call URI.getHost(). This will return 'mycluster', which is not a valid host anywhere.
Summary: In an HDFS HA setup, FileSystem.getUri returns hdfs://<dfs.nameservices>


Keywords: #Hdfs #dfs.nameservices #FileSystem #getUri #Nameservices #host #URI #HA #returns

Saturday, 22 February 2014

Supervised Learning : A Mathematical Foundation

Supervised Learning as per Wiki

Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances.


In short: learning from historical data a mapping between input and output variables, and applying this mapping to predict the output for unseen data.

STEPS of SUPERVISED LEARNING

  • Determine the type of training examples
    • What kind of data is to be used?
      • Handwriting analysis: a single handwritten character, a word, a line
  • Gather a training set
  • Determine the input feature representation of the learned function.
    • Each input object is transformed into a feature vector.
  • Determine the structure of the learned function and the corresponding learning algorithm.
    • e.g. SVM or decision tree (see the sketch after this list)
  • Complete the design.
    • Run the algorithm on the training set.
  • Evaluate accuracy.
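A toy run-through of these steps; the iris data set and the decision tree are only chosen for the illustration:

# Toy illustration of the supervised learning steps with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Gather a training set: each example is a feature vector with a label.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Choose the structure of the learned function (a decision tree here) and train it.
clf = DecisionTreeClassifier().fit(X_train, y_train)

# Evaluate accuracy on examples the algorithm has not seen.
print(accuracy_score(y_test, clf.predict(X_test)))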

 Mathematics behind Supervised Learning

Goal: infer a function (a classifier) h : X → Y from sample data

A_n = ((x_1, y_1), ..., (x_n, y_n)) ∈ (X × Y)^n,

with input points x_i ∈ X and output points y_i ∈ Y.

y_i ∈ ℝ for regression problems, and y_i is discrete, e.g. {-1, +1}, for classification problems.

 
Two hypotheses:
1) Find a function h to model the dependency in P(x, y).
2) Measure the error or loss between the prediction h(x) and the desired output y.

Loss function: L : Y × Y → ℝ+

For binary classification with Y = {-1, +1}: L(h(x), y) = 1/2 |h(x) − y|.
For unsupervised learning a loss function can be L_u(h(x)) = log(h(x)).
The risk of a function h, also called the generalization error, is

R(h) = \int L(h(x), y)\,dP(x, y).

Classification looks for the function h that minimises R(h), but the joint probability P(x, y) is unknown. Treating the inputs and outputs as random variables, the risk can be written as

R(h) = \mathbf{E}[L(h(x), y)] = \int L(h(x), y)\,dP(x, y).


Empirical Risk Minimisation:

The learning algorithm looks for the hypothesis h ∈ H for which R(h) is minimal:

h^* = \arg \min_{h \in \mathcal{H}} R(h).

R(h) cannot be computed directly, so it is approximated by the empirical risk, the average loss over the training sample:

R_\mbox{emp}(h) = \frac{1}{m} \sum_{i=1}^m L(h(x_i), y_i).

By the law of large numbers, as m goes to infinity the empirical risk converges pointwise to R(h). Empirical risk minimisation therefore selects

\hat{h} = \arg \min_{h \in \mathcal{H}} R_{\mbox{emp}}(h).
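As a tiny numerical illustration of the empirical risk with the binary loss defined above (the sample below is made up):

# Empirical risk: the average loss of a hypothesis h over a finite sample.
def loss(prediction, y):
    return 0.5 * abs(prediction - y)           # L(h(x), y) = 1/2 |h(x) - y|

def empirical_risk(h, sample):
    return sum(loss(h(x), y) for x, y in sample) / len(sample)

sample = [(-2, -1), (-1, -1), (1, +1), (3, +1)]  # made-up (x, y) pairs
h = lambda x: 1 if x > 0 else -1                 # a simple threshold hypothesis
print(empirical_risk(h, sample))                 # 0.0 here, since h fits the sample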
 
Consistency is important: learning thus depends on the class of functions from which h is chosen.
The learned function should have smooth boundaries, so regularization is applied (regularization theory).

Regularized Risk:
The regularized risk adds a roughness penalty Ω(h) to the empirical risk, R_reg(h) = R_emp(h) + λ Ω(h), where λ controls the strength of the penalty.


Risk Bounds

A risk bound is a statement about H, A_n and δ such that, for any h ∈ H, it holds with probability at least 1 − δ.

Consider the case of a finite class of functions, |H| = N. Applying Hoeffding's inequality and summing over the set gives, with probability at least 1 − δ,

R(h) \le R_\mbox{emp}(h) + \sqrt{\frac{\log N + \log(1/\delta)}{2n}}.

The risk is thus bounded by the sum of 2 terms: the empirical error and a bound that depends on the size of the class. As n -> infinity, the second term tends to 0.

Bias-Variance Dilemma
If H is large, one can find an h that fits the data, but it also fits the noise in the data points, resulting in poor performance on new data.
This is called overfitting.

Overfitting according to Wiki
Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight).

VC dimension
A measure of the capacity of a class of functions: the largest number of points that the functions in the class can label in every possible way (shatter).
 

 


STRUCTURAL RISK MINIMISATION
Structural risk minimization seeks to prevent overfitting by incorporating a regularization penalty into the optimization. The regularization penalty can be viewed as implementing a form of Occam's razor that prefers simpler functions over more complex ones


 
 

In practice this means selecting the classifier that maximizes the margin γ.



*Formulas taken from the paper by Cunningham:
http://tinyurl.com/ll4jxhq