2017-02-28

On EMR Spark, a JDBC load fails the first time and then runs: I am using spark-shell with Spark 2.1.0 on AWS Elastic MapReduce 5.3.1 to load data from a Postgres database. loader.load always fails once and then succeeds. Why does this happen?

[[email protected][SNIP] ~]$ SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell --driver-class-path ~/postgresql-42.0.0.jar 
Spark Command: /etc/alternatives/jre/bin/java -cp /home/hadoop/postgresql-42.0.0.jar:/usr/lib/spark/conf/:/usr/lib/spark/jars/*:/etc/hadoop/conf/ -Dscala.usejavacp=true -Xmx640M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p org.apache.spark.deploy.SparkSubmit --conf spark.driver.extraClassPath=/home/hadoop/postgresql-42.0.0.jar --class org.apache.spark.repl.Main --name Spark shell spark-shell 
======================================== 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
17/02/28 17:17:52 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME. 
17/02/28 17:18:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException 
Spark context Web UI available at http://[SNIP] 
Spark context available as 'sc' (master = yarn, app id = application_1487878172787_0014). 
Spark session available as 'spark'. 
Welcome to 
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_121) 
Type in expressions to have them evaluated. 
Type :help for more information. 

scala> val loader = spark.read.format("jdbc") // connection options removed 
loader: org.apache.spark.sql.DataFrameReader = [email protected] 

scala> loader.load 
java.sql.SQLException: No suitable driver 
    at java.sql.DriverManager.getDriver(DriverManager.java:315) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32) 
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125) 
    ... 48 elided 

scala> loader.load 
res1: org.apache.spark.sql.DataFrame = [id: int, fsid: string ... 4 more fields] 

Did you ever find a solution? I'm seeing the same behavior on the current EMR version. Also pinging @Raje. – kadrach


Solved my problem :) – kadrach

Answers


I ran into the same problem. I was trying to connect Spark to Vertica over JDBC, using spark-shell, Spark version 2.2.0, and Java version 1.8.

External jars added for the connection: vertica-8.1.1_spark2.1_scala2.11-20170623.jar and vertica-jdbc-8.1.1-0.jar

Code to connect:

import java.sql.DriverManager 
import java.util.Properties 
import com.vertica.jdbc.Driver 


val jdbcUsername = "<username>" 
val jdbcPassword = "<password>" 
val jdbcHostname = "<vertica server>" 
val jdbcPort = <vertica port> 
val jdbcDatabase ="<vertica DB>" 
val jdbcUrl = s"jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}" 

val connectionProperties = new Properties() 
connectionProperties.put("user", jdbcUsername) 
connectionProperties.put("password", jdbcPassword) 

val connection = DriverManager.getConnection(jdbcUrl, connectionProperties) 
java.sql.SQLException: No suitable driver found for jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword} 

    at java.sql.DriverManager.getConnection(Unknown Source) 
    at java.sql.DriverManager.getConnection(Unknown Source) 
    ... 56 elided 

If I run the same command a second time, I get the output below and the connection is established:

scala> val connection = DriverManager.getConnection(jdbcUrl, connectionProperties) 
connection: java.sql.Connection = [email protected] 
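(Not part of the original answer, just a sketch: one standard JDBC workaround for "No suitable driver" is to load the driver class explicitly before the first getConnection call, so its static initializer registers it with DriverManager. This reuses the jdbcUrl, jdbcUsername, and jdbcPassword values defined above.)

import java.sql.DriverManager
import java.util.Properties

// Force the Vertica driver class to load; its static initializer
// registers the driver with DriverManager before we ask for a connection.
Class.forName("com.vertica.jdbc.Driver")

val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword)

val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)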

I ran into this today with PySpark and the SQL Server JDBC driver. At first I built a simple workaround: catch the Py4JJavaException and retry, since the second attempt would work.

The trick is to specify the driver class in the DataFrameReader.jdbc call.

Using PySpark:

spark.read.jdbc(..., properties={'driver': 'com.microsoft.sqlserver.jdbc.SQLServerDriver'}) 

Then all that is needed is:

spark-submit --jars s3://somebucket/sqljdbc42.jar script.py 

Using Scala and @Raje's example above: connectionProperties.put("driver", "...")
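For the Postgres case in the original question, a minimal sketch of the same fix, run inside spark-shell where spark is already defined (the host, database, table, and credentials below are placeholders, not values from the question):

// Naming the driver class up front avoids the DriverManager lookup
// that threw "No suitable driver" on the first load() call.
val df = spark.read.format("jdbc")
  .option("url", "jdbc:postgresql://<host>:5432/<database>")  // placeholder URL
  .option("dbtable", "<schema>.<table>")                      // placeholder table
  .option("user", "<username>")
  .option("password", "<password>")
  .option("driver", "org.postgresql.Driver")                  // Postgres JDBC driver class
  .load()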