
Understanding Co-partitions and Co-Grouping In Spark

RDDs in Spark are partitioned; key-value RDDs produced by shuffle operations use the HashPartitioner by default. Co-partitioned RDDs use the same partitioner and therefore have their data distributed across partitions in the same way.

val data = Array(1, 2, 3, 4, 5)
val rdd1 = sc.parallelize(data, 10)
val data2 = Array(5, 8, 9, 10, 2)
val rdd2 = sc.parallelize(data2, 10)

Both of the RDDs defined above rely on the same partitioning scheme, the HashPartitioner. It partitions data the same way for both RDDs: equal values in the two RDDs produce the same hash, and the number of partitions specified is also the same. Such co-partitioned RDDs reduce shuffling over the network to a great extent, because all the values for a given key sit in the corresponding partitions of the two RDDs, which is exactly what key-based transformations such as join need.
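To make the shared partitioner explicit, here is a hedged sketch (the keyBy step and the variable names are illustrative additions, not part of the original snippet): both RDDs are keyed and then partitioned with the same HashPartitioner.

import org.apache.spark.HashPartitioner

// Turn the plain RDDs into (key, value) pairs so they can be partitioned by key.
val pairRdd1 = rdd1.keyBy(identity).partitionBy(new HashPartitioner(10))
val pairRdd2 = rdd2.keyBy(identity).partitionBy(new HashPartitioner(10))

// Both RDDs now report the same partitioner, so key-based operations such as
// join can line their partitions up without an extra shuffle.
println(pairRdd1.partitioner == pairRdd2.partitioner) // true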

Co-grouping builds on co-partitioning to give an efficient performance improvement when multiple RDDs have to be joined, compared with calling join again and again. With every join operation the destination RDD gets either the supplied or the default number of partitions, and the join may or may not require shuffling the two input RDDs, depending on whether they were co-partitioned and had the same number of partitions.

val rdd3 = rdd1.join(rdd2) // assumes rdd1 and rdd2 are key-value (pair) RDDs

Since rdd1 and rdd2 use the same partitioner and have the same number of partitions, the join that produces rdd3 does not require a shuffle. If they had different numbers of partitions, the contents of the RDD with fewer partitions would be reshuffled. Because the number of partitions for the join is not specified, it falls back to the default configuration.
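If you want to check whether a join actually avoided a shuffle, one hedged way (not from the original post) is to inspect the lineage and partition count of the result:

// A ShuffledRDD stage in the lineage indicates that the join had to repartition its inputs.
println(rdd3.toDebugString)
// When the inputs were co-partitioned, the join result keeps their shared number of partitions.
println(rdd3.partitions.length)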

Performing another join of rdd3 with some rdd4 to create an rdd5 would introduce the chance of yet more shuffling. All these expensive shuffle operations can be avoided by using cogroup when multiple RDDs have to be combined:

val rdd5 = rdd1.cogroup(rdd2, rdd3)

Cogroup groups all the input RDDs by key in a single pass and produces co-partitioned RDDs, so the grouped values can be combined without any further shuffling.
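To make the contrast concrete, here is a hedged sketch (the data and variable names are illustrative, not from the original post) of chained joins versus a single cogroup over three pair RDDs:

// Three pair RDDs keyed the same way (illustrative data).
val a = sc.parallelize(Seq(1 -> "a1", 2 -> "a2"))
val b = sc.parallelize(Seq(1 -> "b1", 3 -> "b3"))
val c = sc.parallelize(Seq(1 -> "c1", 2 -> "c2"))

// Chained joins: each join is a separate key-based operation and may shuffle again.
val chained = a.join(b).join(c)

// cogroup: one grouping of all three RDDs by key; the result is partitioned once and
// every key's values from a, b and c are available together.
val grouped = a.cogroup(b, c) // RDD[(Int, (Iterable[String], Iterable[String], Iterable[String]))]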

Writing Spark Data Frame to HBase

The community behind Spark has put a lot of effort into making the DataFrame APIs efficient and scalable. Reading and writing data between HBase and a Spark DataFrame bridges the gap between the complex SQL queries that can be run in Spark and the key-value store model of HBase. The shc connector implements the standard Spark Datasource API and leverages the Spark Catalyst engine for query optimization.

To map a table in HBase to a table in Spark, we define a Table Catalog, which stores the mapping of the row key, column qualifiers and column families in HBase to the columns of the Spark table.

To work more efficiently and avoid retrieving unwanted data from the region servers, the shc connector supports predicate pushdown, where filter conditions are pushed as close to the data as possible, i.e. to the region server in the case of HBase.

Support for partition pruning splits the Scan/BulkGet into multiple non-overlapping ranges, so only the region servers that hold the requested data perform the Scan/BulkGet.

Specifying range conditions on the row key, for example

session.sql("Select * from sparkHBaseTable where x > 14567 and x < 14568")

will result in a scan operation on HBase with a key range between 14567 and 14568.

timestamp    temp  pressure
1501844115   24    760
1501844125   28    800

The table above, represented as a Spark DataFrame, can be saved to HBase by providing the mapping of the row key, column family and column qualifiers in HBase to the DataFrame columns:

def catalog = s"""{
|"table":{"namespace":"default", "name":"ToolLogs"},
|"rowkey":"key",
|"columns":{
|"timestamp":{"cf":"rowkey", "col":"key", "type":"long"},
|"temp":{"cf":"msmt", "col":"temp", "type":"float"},
|"pressure":{"cf":"msmt", "col":"pressure", "type":"float"}
|}
|}""".stripMargin

Specifying "cf":"rowkey" for the key column is mandatory even though msmt is the column family of our HBase table; this is how the API is designed to work. Once the Table Catalog mapping is defined, the data in the DataFrame can be stored directly to HBase using:


// requires the shc connector (shc-core) on the classpath, which provides HBaseTableCatalog
df.write
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()
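To read the data back with the same catalog, the shc datasource can be used in the other direction; a minimal sketch, assuming a sqlContext is in scope and the DataFrame name is illustrative:

val toolLogsDF = sqlContext.read
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .load()

// Filters on the row-key column benefit from the predicate pushdown described above.
toolLogsDF.filter("timestamp > 1501844115").show()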

Happy Reading …. ☺

Method can take data type as parameter

I am in love, and this time in love with Scala. What a great language, very elegantly designed and written. I thought I knew everything about method overriding when I came across this fantastic feature of Scala that allows one to pass the data type as a parameter.

Function syntax

def <function-name>[<type-name>](<parameter-name>: <type-name>): <type-name> ...


object ParametrizeFuncType {
  def main(args: Array[String]) {
    /**
     * prints true and takes a String
     */
    println(identityfunc[String]("amit": String).isInstanceOf[String])
    /**
     * prints true and takes an Int
     */
    println(identityfunc[Int](1: Int).isInstanceOf[Int])
    /*
     * wow! the classic method-overriding use case handled by making the data type a parameter
     */
  }

  def identityfunc[T](a: T) = a
}

You need not write multiple methods taking different data types. Just write the method once, run it with different types, and you too will fall in love 🙂
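As a further hedged illustration (not from the original post), the same idea scales to more than one type parameter; the pair method and object name below are assumptions for demonstration:

object PairExample {
  // One generic method replaces a family of overloads, one per combination of types.
  def pair[A, B](first: A, second: B): (A, B) = (first, second)

  def main(args: Array[String]): Unit = {
    println(pair[String, Int]("amit", 1)) // (amit,1)
    println(pair(3.14, true))             // the type parameters are usually inferred
  }
}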

Hidden Secrets while trying SQL Dataframes

There are a few points to keep in mind while using DataFrames with Spark SQL, such as when to import sqlContext.implicits._ and how to deal with the "type not found" error.

Let’s try to understand it with an example

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
/**
* @author amithora
*/
object SqlContextJsonRDD {

  // Note: declare the case class at the object level; declaring it inside the
  // method leads to a "type not found" (missing TypeTag) error when calling toDF().
  case class Employee(name: String, rank: Int)

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("ImplicitExampleSQL")
    val sparkContext = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sparkContext)

    // Note: this import must come after the sqlContext value is created; it is an
    // instance import, so placing it at the top of the file right after the package
    // declaration does not compile. Without it, toDF() is not available.
    import sqlContext.implicits._

    val textRdd = sparkContext.textFile("Person.txt", 2)

    val personDF = textRdd
      .map(line => line.split(","))
      .map(p => Employee(p(0), p(1).trim().toInt))
      .toDF()
  }
}
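For completeness, a hedged usage sketch (the file contents and table name below are assumptions, not from the original post): if Person.txt holds comma-separated name,rank lines, the DataFrame can be registered and queried with SQL inside main:

// Person.txt (assumed contents):
//   amit,1
//   john,2

personDF.registerTempTable("employees")
sqlContext.sql("SELECT name FROM employees WHERE rank > 1").show()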

Scala Fold operation Explained!

Fold collapses a collection of values into a single result. Fold on a List takes an initial value and a function of two arguments that is applied to all the elements of the list.
In the first iteration, the initial (zero/neutral) value is passed as the first argument of the function and the current element of the list we are iterating over as the second argument. In the next iteration, the result of the first iteration is passed as the first argument and the current element as the second, and the process goes on.
Below is an example of finding the minimum of a list of elements in Scala.

We pass the value at position 0 of the list as the initial value.

In the first iteration:
the value at position 0, i.e. 2, is passed as the first argument of the function, i.e. assigned to min;
the current element we are iterating over, which is also 2, is passed as the second argument, i.e. assigned to max.

In the second iteration:
the result of the first iteration, i.e. 2, is passed as the first argument, i.e. assigned to min;
the current element, i.e. 1, is passed as the second argument, i.e. assigned to max.

and the story goes on ….. 🙂

package example

/**
* @author amith2
*/
object ScalaFoldMIn {
  def main(args: Array[String]): Unit = {
    val scalaData = List(2, 1, 3, 5)
    println(scalaData.fold(scalaData(0))((min, max) => {
      if (min < max)
        min
      else
        max
    }))
  }
}
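A related hedged sketch (not from the original post): foldLeft makes the accumulator/current-element roles explicit and also lets the result have a different type than the list elements:

object ScalaFoldLeftExample {
  def main(args: Array[String]): Unit = {
    val scalaData = List(2, 1, 3, 5)

    // Same minimum, with clearer parameter names: acc carries the result so far.
    println(scalaData.foldLeft(scalaData.head)((acc, current) => if (acc < current) acc else current)) // 1

    // The accumulator can have a different type, e.g. building a String from the Ints.
    println(scalaData.foldLeft("")((acc, current) => acc + current + " ")) // "2 1 3 5 "
  }
}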