RedshiftInputFormat

Hadoop input format for Redshift tables unloaded with the ESCAPE option.
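
The files it reads are the output of Redshift's UNLOAD command run with the ESCAPE option. As a hedged illustration (not part of this repository), the unload could be issued over JDBC as sketched below; the cluster URL, credentials, table, and S3 path are placeholders:

import java.sql.DriverManager

// Requires the Redshift JDBC driver on the classpath; every connection detail
// and path below is a placeholder, not something defined by this project.
val conn = DriverManager.getConnection(
  "jdbc:redshift://example-cluster:5439/dev", "user", "password")
try {
  // UNLOAD with ESCAPE so embedded delimiters and newlines are escaped,
  // which is the form of output this input format is written for.
  conn.createStatement().execute(
    """UNLOAD ('SELECT name, age FROM users')
      |TO 's3://my-bucket/unload/users_'
      |CREDENTIALS 'aws_access_key_id=<id>;aws_secret_access_key=<key>'
      |ESCAPE ALLOWOVERWRITE""".stripMargin)
} finally {
  conn.close()
}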

Usage in Spark Core:

import com.databricks.examples.redshift.input.RedshiftInputFormat

// The key and value classes determine the element type of the RDD:
// RDD[(java.lang.Long, Array[String])], where each value is one row's fields.
val records = sc.newAPIHadoopFile(
  path,
  classOf[RedshiftInputFormat],
  classOf[java.lang.Long],
  classOf[Array[String]])
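
Each element of the resulting RDD is a (key, value) pair whose value holds one row's fields as strings. A minimal follow-up sketch, assuming each row has at least two columns:

// Hypothetical example: select the second field of every record.
val secondColumn = records.map { case (_, fields) => fields(1) }
secondColumn.take(5).foreach(println)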

Usage in Spark SQL:

import com.databricks.examples.redshift.input.RedshiftInputFormat._

// redshiftFile() returns a SchemaRDD in which every column is a string;
// the second argument supplies the column names.
val records: SchemaRDD = sqlContext.redshiftFile(path, Seq("name", "age"))
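
Because redshiftFile() returns a SchemaRDD, the result can be used with ordinary Spark SQL. A short sketch, assuming the temporary table name below; since every column is a string, numeric columns must be cast explicitly:

// Hypothetical example: register the SchemaRDD and query it with SQL.
records.registerTempTable("redshift_records")
sqlContext.sql("SELECT name, CAST(age AS INT) AS age FROM redshift_records")
  .collect()
  .foreach(println)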
