
Aries Research Note

PySpark vs. Pandas (Part 5: SQL window functions)

10/31/2016

In SQL, by defining a specific "window", one can perform a "calculation across a set of table rows that are somehow related to the current row" (from the PostgreSQL documentation). This greatly extends SQL's analytic power.

The implementation in PySpark is syntactically quite close to SQL: one has to define a "window" literally. Pandas also has "window" functions, but I found them to be more like rolling windows than SQL's window functionality (correct me if I am wrong).

My preferred way to replicate window functions in Pandas and PySpark is below:

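The original code screenshots are not reproduced here; instead, here is a minimal sketch of both approaches, assuming the Titanic data is loaded as a Pandas frame df and a PySpark frame sdf (columns Pclass and Fare assumed):

    from pyspark.sql import Window
    from pyspark.sql import functions as F

    # PySpark: define a window literally, then aggregate over it
    w = Window.partitionBy("Pclass")
    sdf = sdf.withColumn("AvgFareByClass", F.avg("Fare").over(w))

    # Pandas: the same per-group value, attached with assign + groupby().transform()
    df = df.assign(AvgFareByClass=df.groupby("Pclass")["Fare"].transform("mean"))
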
From this comparison, PySpark looks more SQL-like, with a nice syntax for the task. However, Pandas is actually more versatile: many different kinds of functionality can be defined through the "assign" keyword. Good or bad? I think it depends.

Plotly: subplots in figure (Part 2)

10/30/2016

The caveat in Part 1 is about the Pie chart: if one tries to replace any go.Bar or go.Histogram with a go.Pie chart, an error shows up (plotly version 1.12.9).

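A minimal sketch of the failing pattern (the exact traceback depends on the plotly version, so it is not reproduced here):

    import plotly.graph_objs as go
    from plotly import tools

    # make_subplots builds axis-based cells, but go.Pie has no xaxis/yaxis,
    # so appending a Pie trace into a subplot cell raises an error in plotly 1.x
    fig = tools.make_subplots(rows=1, cols=2)
    fig.append_trace(go.Bar(x=["a", "b"], y=[1, 2]), 1, 1)            # fine
    fig.append_trace(go.Pie(labels=["a", "b"], values=[1, 2]), 1, 2)  # error
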
The reason is that the "tools.make_subplots" function creates a set of subplots based on different xaxis and yaxis definitions, while go.Pie does not require (and does not have) an axis property.

The way to overcome this challenge is to build up the graph from scratch, using "domain" and "anchor".

To explain the concept, take a look at the "layout" in Part 1's example. Each layout axis (x and y) is attached to a specific domain of the figure, and each axis is "anchored" to a specific trace. Subplot 1 is on the upper left, so its xaxis domain is [0, 0.45] (left side) and its yaxis domain is [0.625, 1] (upper side).
[screenshot: the layout from Part 1, showing each axis's domain and anchor]
For a Pie plot, since it cannot have an axis, the make_subplots approach fails, but the following approach still works:

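A minimal sketch of that approach, with hypothetical data: two pie charts placed side by side through their "domain", with annotations standing in for the subplot titles.

    import plotly.graph_objs as go
    from plotly.offline import iplot

    # each Pie trace claims a region of the figure through its "domain"
    pie1 = go.Pie(labels=["A", "B"], values=[30, 70], domain={"x": [0, 0.45]})
    pie2 = go.Pie(labels=["C", "D"], values=[55, 45], domain={"x": [0.55, 1]})

    layout = go.Layout(
        title="Two pies in one figure",
        # annotations stand in for the subplot titles
        annotations=[
            dict(text="Left pie", x=0.18, y=1.05, xref="paper", yref="paper", showarrow=False),
            dict(text="Right pie", x=0.82, y=1.05, xref="paper", yref="paper", showarrow=False),
        ],
    )
    iplot(go.Figure(data=[pie1, pie2], layout=layout))
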
I have to say, I hate using annotations as a way to make the subplot titles ... however, I currently haven't found any other way to directly assign a subplot title. If you have a better approach to simplify the code in general, feel free to comment on this blog.

Plotly: subplots in figure (Part 1)

10/30/2016

In Matplotlib, subplots can easily be pulled out as follows:

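The original snippet was a screenshot; a minimal stand-in with made-up data:

    import matplotlib.pyplot as plt

    # a 2x2 grid: one figure, four axes, each drawn on independently
    fig, axes = plt.subplots(2, 2, figsize=(8, 6))
    axes[0, 0].hist([1, 2, 2, 3, 3, 3])
    axes[0, 1].plot([1, 2, 3], [3, 1, 2])
    axes[1, 0].bar([1, 2, 3], [5, 2, 7])
    axes[1, 1].scatter([1, 2, 3], [2, 4, 1])
    plt.tight_layout()
    plt.show()
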
However, matplotlib is not very smart about handling axis tick rotation (sometimes the ticks collide and become hard to read), and this can be troublesome for an automated visualization process. In general, I found plotly offers a better automated layout.

Subplots in plotly and matplotlib are conceptually different:
1. (matplotlib) The hierarchy is figure --- axes --- artist, so one figure can contain multiple axes, and each axes has its own set of artists. For example, a legend is an artist, and each axes can have its own legend.
2. (plotly) The main components of a figure are "data" and "layout"; the way subplots work is to create multiple traces and put them on different axes. It is still one big figure: each subplot is just a different axis pair (x, y) placed at a different location in that figure.

Here are a few examples in action:

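A minimal sketch along those lines, with hypothetical data (plotly 1.x offline API assumed):

    import plotly.graph_objs as go
    from plotly import tools
    from plotly.offline import iplot

    # one big figure: make_subplots lays out the axis pairs, append_trace fills them
    fig = tools.make_subplots(rows=2, cols=2,
                              subplot_titles=("bar", "scatter", "histogram", "line"))
    fig.append_trace(go.Bar(x=["a", "b", "c"], y=[5, 2, 7]), 1, 1)
    fig.append_trace(go.Scatter(x=[1, 2, 3], y=[2, 4, 1], mode="markers"), 1, 2)
    fig.append_trace(go.Histogram(x=[1, 2, 2, 3, 3, 3]), 2, 1)
    fig.append_trace(go.Scatter(x=[1, 2, 3], y=[3, 1, 2]), 2, 2)
    iplot(fig)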

Plotly: Basic Settings for Data Science

10/25/2016

There are a million ways to use a piece of software; this is my preferred way (maybe not optimal, but workable).
As a data scientist, I mostly want to use Plotly for interactive exploratory analytics, since it provides a way to get a better feel for the data.

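A minimal sketch of that kind of setup, assuming a Jupyter notebook and a local titanic.csv (file name hypothetical):

    import pandas as pd
    import plotly.graph_objs as go
    from plotly.offline import init_notebook_mode, iplot

    # offline mode: plots render inside the notebook, no plotly account needed
    init_notebook_mode(connected=True)

    # a quick interactive look at one column
    df = pd.read_csv("titanic.csv")
    iplot([go.Histogram(x=df["Age"].dropna())])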

PySpark vs. Pandas (Part 4: set-related operations)

10/24/2016

The "set" related operation is more like considering the data frame as if it is a "set". Common set operations are: union, intersect, difference. Pandas and PySpark have different ways handling this. 

Since Pandas has the concept of an Index, the thinking in Pandas is sometimes a little different from the traditional set operations.

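A minimal sketch with two small hypothetical frames:

    import pandas as pd

    df1 = pd.DataFrame({"id": [1, 2, 3], "val": ["a", "b", "c"]})
    df2 = pd.DataFrame({"id": [2, 3, 4], "val": ["b", "c", "d"]})

    union = pd.concat([df1, df2]).drop_duplicates()                      # union
    intersect = pd.merge(df1, df2, how="inner")                          # intersection
    difference = pd.concat([df1, df2, df2]).drop_duplicates(keep=False)  # df1 - df2
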
For Spark it is quite easy: since Spark is so close to SQL, it has those keywords implemented directly.

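The same three operations in PySpark, as a minimal sketch (assuming a Spark 2.x session named spark and the two frames above):

    sdf1 = spark.createDataFrame(df1)
    sdf2 = spark.createDataFrame(df2)

    union = sdf1.union(sdf2).distinct()   # union (unionAll in Spark 1.x)
    intersect = sdf1.intersect(sdf2)      # intersection
    difference = sdf1.subtract(sdf2)      # sdf1 - sdf2
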
So in this round of the comparison, Spark is more intuitive than Pandas for SQL-style set operations.

PySpark vs. Pandas (Part 3: group-by related operations)

10/23/2016

Group-by is frequently used in SQL for aggregation statistics. To bring big data back down to a size that can be visualized, the group-by statement is almost essential.

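The original screenshots are not available, so here is a minimal sketch, assuming the Titanic data as a Pandas frame df and a PySpark frame sdf:

    # Pandas: the grouping key becomes the index, hence the reset_index() below
    pd_out = df.groupby("Pclass")["Survived"].mean().reset_index()

    # PySpark: the result column gets the long name "avg(Survived)"
    sp_out = sdf.groupBy("Pclass").agg({"Survived": "mean"})
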
In my opinion, neither of the above approaches is "perfect". For Pandas, one needs to do a "reset_index()" to get the "Survived" column back as a normal column; for Spark, the column name is changed into a descriptive but very long one.

For Spark, we can introduce the alias function on the column to make things much nicer:

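A minimal sketch:

    from pyspark.sql import functions as F

    # alias() gives the aggregated column a clean, short name
    sp_out = sdf.groupBy("Pclass").agg(F.mean("Survived").alias("SurvivalRate"))
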
All of the above are "simple" aggregations that already exist in Pandas or Spark. What about complicated ones, like a weighted average or a sum of squares?

The complicated cases can be split into:
1. aggregation on a single column (like a sum of squares)
2. aggregation over multiple columns (like a weighted average based on another column)

Certainly, before getting complicated in the aggregation itself, it is always easier to just create a new column (to do the heavy lifting) and then simply aggregate on that column! Here, though, I just want to show that Pandas offers a bit more flexibility, as in the sketch below.

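A minimal sketch of both cases on the Titanic data (column names assumed):

    # 1. single column: sum of squares of Fare within each class
    sq_sum = df.groupby("Pclass")["Fare"].agg(lambda s: (s ** 2).sum())

    # 2. multiple columns: Fare-weighted average Age within each class
    weighted_age = df.groupby("Pclass").apply(
        lambda g: (g["Age"] * g["Fare"]).sum() / g["Fare"].sum())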

PySpark vs. Pandas (Part 2: join-related operations)

10/23/2016

Data is usually spread across different tables, and insights are extracted by merging all the information together: join-related operators are very important for getting this done.

There are three kinds of join operators:
1. join by key(s)
2. join as set operator on Rows
3. join as set operator on Columns

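A minimal sketch of the first kind, with hypothetical frames df_left/df_right (Pandas) and sdf_left/sdf_right (PySpark) sharing the key PassengerId:

    # Pandas: inner join on the key
    joined_pd = df_left.merge(df_right, on="PassengerId", how="inner")

    # PySpark: the same join
    joined_sp = sdf_left.join(sdf_right, on="PassengerId", how="inner")
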
The only difference (and potential problem) here is that Pandas automatically changes identical (non-key) column names by adding a suffix to avoid duplication, while Spark just keeps the same name! Although there is a way to still refer to the right "Survived" column, it is not very convenient. So the following would be the recommended way: rename the colliding column first.

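A minimal sketch of that workaround in PySpark:

    # rename the colliding column before joining, so the names stay unambiguous
    sdf_right2 = sdf_right.withColumnRenamed("Survived", "Survived_right")
    joined_sp = sdf_left.join(sdf_right2, on="PassengerId", how="inner")
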
The second kind of join is more like a set operator: basically, consider the two DFs as two sets and take their "intersection", "difference", or "union".

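A minimal sketch, assuming the two frames share the same columns:

    import pandas as pd

    # Pandas: treat the rows of the two frames as sets
    rows_union = pd.concat([df_left, df_right]).drop_duplicates()

    # PySpark: columns must line up positionally
    rows_union_sp = sdf_left.union(sdf_right)
    rows_intersect_sp = sdf_left.intersect(sdf_right)
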
The third kind of join extends the current data frame along its index. Most of the time it is similar to joining on the same key(s) to add extra columns, but in Pandas one can extend the columns according to the index.

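A minimal sketch of extending columns along the index in Pandas:

    import pandas as pd

    # align on the index (here PassengerId) and place the columns side by side
    wide = pd.concat([df_left.set_index("PassengerId"),
                      df_right.set_index("PassengerId")], axis=1)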

PySpark vs. Pandas (Part 1: select and filter)

10/22/2016

As long as the data can be fully loaded into memory, Pandas is a great data analytics tool. However, when the data is much bigger, Spark comes into play. Pandas and PySpark DataFrames have different APIs, and it is very easy to get confused or to not know the best practice. I want to summarize my best practices so that others can take fewer detours.

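A minimal sketch of select and filter on the Titanic data, assuming a Pandas frame df and a PySpark frame sdf:

    # Pandas: boolean mask for the filter, column list for the select
    adults_pd = df.loc[df["Age"] > 18, ["Name", "Age"]]

    # PySpark: filter() then select() (where() is an alias for filter())
    adults_sp = sdf.filter(sdf["Age"] > 18).select("Name", "Age")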

Pandas: reshape data frame

10/2/2016

A data frame has an index, columns, and values inside. A selection operation usually does not affect the table's structure; it only "takes selected pieces" out. Other operations, like a "group-by", may change the structure. So how do we change the structure back?

In Excel, there is the concept of a pivot table, which converts one or more columns into an index/column and presents the data nicely. This is quite a nice feature and provides analytic insights very quickly. Does Pandas support this?

The answer is "for sure!". Here are a few functions that are used very often in Pandas to manipulate the "shape" of a data frame.

1. reset_index / set_index
    Very self-explanatory ... reset_index changes an index back to a column, while set_index moves a column into the index.
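
A minimal sketch on the Titanic data (titanic.csv assumed):

    import pandas as pd

    df = pd.read_csv("titanic.csv")
    by_class = df.groupby("Pclass")["Survived"].mean()  # "Pclass" ends up in the index
    as_column = by_class.reset_index()                   # index -> back to a column
    back_again = as_column.set_index("Pclass")           # column -> back into the index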

2. pivot and pivot_table
    It always gets confusing (to me) how to do a pivot table in Pandas, but Nikolay Grozev's blog provides a very intuitive visualization. I will use one of his figures here for easy illustration; the reader is encouraged to go to his blog for more details.
[figure from Nikolay Grozev's blog: how pivot reshapes a data frame]
As you can see, pivot_table can be considered an "advanced" pivot, where the table is created with more control over which aggregation function to use, while pivot provides a faster way to simply "reshape" the data frame into the one needed.
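
A minimal sketch of both on the Titanic data:

    # pivot_table reshapes AND aggregates: survival rate by class (rows) and sex (columns)
    rate = df.pivot_table(index="Pclass", columns="Sex",
                          values="Survived", aggfunc="mean")

    # pivot only reshapes, so the index/column pairs must already be unique
    counts = df.groupby(["Pclass", "Sex"]).size().reset_index(name="n")
    wide = counts.pivot(index="Pclass", columns="Sex", values="n")
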
3. stack and unstack
   In Nikolay Grozev's blog, this section is also very well illustrated. I borrow one figure here, and the reader is highly recommended to check the details in the original blog.
[figure from Nikolay Grozev's blog: stack and unstack]
Let's see how it works on the Titanic data set:
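The result screenshots are gone; a minimal sketch of the calls themselves, reusing the pivot_table result from above:

    stacked = rate.stack()         # the "Sex" columns are pushed into a (Pclass, Sex) index
    unstacked = stacked.unstack()  # ... and pulled back out into columns
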
Now, changing a data frame's shape should not be a problem anymore.

Pandas: group-by-aggregation deep dive

10/2/2016

A friend once asked me a question: what is the function in Pandas that is similar to R's summarise (as in dplyr)? Surprisingly, I was not able to give a straight answer. However, after some digging, I finally found a (somewhat) satisfactory answer.

First, let's look at how summarise in dplyr works (the code is borrowed from RStudio):

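The screenshot is gone; below is a sketch in R of the kind of dplyr code referenced (the flights example from RStudio's dplyr introduction, reconstructed from the names mentioned below):

    library(dplyr)
    library(nycflights13)

    flights %>%
      group_by(year, month, day) %>%
      summarise(arr = mean(arr_delay, na.rm = TRUE),
                dep = mean(dep_delay, na.rm = TRUE)) %>%
      filter(arr > 30 | dep > 30)
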
Other functions aside, focusing on the "summarise" call, one can easily specify the aliases "arr" and "dep", the aggregation function "mean", the columns to work over ("arr_delay" and "dep_delay"), and even extra arguments such as "na.rm". This is very powerful.

Now, looking at the alternative in Pandas, let's focus only on the "summarise" part, with the help of the Titanic data set:

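A minimal sketch (column names from the Titanic data assumed):

    # one statistic per column, with the grouping key pulled back out of the index
    out = df.groupby("Pclass").agg({"Age": "mean", "Fare": "mean"}).reset_index()
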
The functionality looks similar, but ... what about having not only "mean" but also "std" and "max" over the same column, with different aliases like dplyr's "arr" and "dep"? Then we have to change the code to:

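A minimal sketch of that version:

    # several statistics over the same column -> a MultiIndex across the columns
    df1 = df.groupby("Pclass").agg({"Age": ["mean", "std", "max"]})
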
But ... what is this multi-level set of columns? This is a Pandas concept called a "MultiIndex". Personally, I find a MultiIndex over the columns hard to manipulate, so I prefer to drop it after the aggregation. The way to do this is:

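A minimal sketch of dropping the MultiIndex:

    # join the two levels into flat names like "Age_mean", then restore Pclass as a column
    df1.columns = ["_".join(col) for col in df1.columns]
    df1 = df1.reset_index()
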
However, is there a way to do EVERYTHING in one line? I don't like having to define a "df1" and then change its columns. Here is a trick: specify the column right after the "groupby", and magic will happen :)

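A minimal sketch of the one-line trick:

    # selecting a single column right after the groupby keeps the result columns flat
    out = df.groupby("Pclass")["Age"].agg(["mean", "std", "max"]).reset_index()
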
Now mission completed :)