
I am trying to read a simple text file from Google Cloud Storage in a local Spark Scala program, as described in the blog post below:

https://cloud.google.com/blog/big-data/2016/06/google-cloud-dataproc-the-fast-easy-and-safe-way-to-try-spark-20-preview

I am trying to read a file from Google Cloud Storage using Spark Scala. For that I have imported the Google Cloud Storage connector and the google-cloud-storage library as follows:

// https://mvnrepository.com/artifact/com.google.cloud/google-cloud-storage 
compile group: 'com.google.cloud', name: 'google-cloud-storage', version: '0.7.0' 

// https://mvnrepository.com/artifact/com.google.cloud.bigdataoss/gcs-connector 
compile group: 'com.google.cloud.bigdataoss', name: 'gcs-connector', version: '1.6.0-hadoop2' 

After that I created a simple Scala object file like the one below (after creating a SparkSession):

val csvData = spark.read.csv("gs://my-bucket/project-data/csv") 
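
For reference, the SparkSession was created along these lines; a minimal sketch, assuming a local master (the app name and master setting are illustrative, not from the original code):

import org.apache.spark.sql.SparkSession

// Local SparkSession; the app name and "local[*]" master are placeholders.
val spark = SparkSession.builder()
  .appName("gcs-read-test")
  .master("local[*]")
  .getOrCreate()

// The read call from above, using the gs:// scheme handled by the connector.
val csvData = spark.read.csv("gs://my-bucket/project-data/csv")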

But it throws the following error:

17/03/01 20:16:02 INFO GoogleHadoopFileSystemBase: GHFS version: 1.6.0-hadoop2 
17/03/01 20:16:23 WARN HttpTransport: exception thrown while executing request 
java.net.SocketTimeoutException: connect timed out 
    at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) 
    at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) 
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) 
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) 
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) 
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 
    at java.net.Socket.connect(Socket.java:589) 
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175) 
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) 
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) 
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) 
    at sun.net.www.http.HttpClient.New(HttpClient.java:308) 
    at sun.net.www.http.HttpClient.New(HttpClient.java:326) 
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169) 
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105) 
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999) 
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933) 
    at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) 
    at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981) 
    at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:158) 
    at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489) 
    at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:205) 
    at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:70) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1816) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:1003) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:966) 
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433) 
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88) 
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467) 
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367) 
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287) 
    at org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:317) 
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:354) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149) 
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:413) 
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:349) 
    at test$.main(test.scala:41) 
    at test.main(test.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) 

I have set up all the authentication as well. I am not sure why the credential refresh times out.

Edit

I am running the above code from IntelliJ IDEA on Windows. A JAR of the same code works fine on Google Cloud Dataproc, but running it from my local system produces the error above. I have installed the Spark, Scala and Google Cloud plugins in IntelliJ.

One more thing: I created a Dataproc instance and tried to connect to its external IP address as described in the documentation, https://cloud.google.com/compute/docs/instances/connecting-to-instance#standardssh, but it is unable to connect to the server and gives a timeout error.

Answers


Thanks Dennis for pointing this issue in the right direction. Since I am using Windows there is no core-site.xml, as Hadoop is not available for Windows.

I downloaded a pre-built Spark distribution and configured the parameters you mentioned in the code itself, as follows:

That is, create a SparkSession and use its variable to configure the Hadoop parameters, e.g. spark.sparkContext.hadoopConfiguration.set("google.cloud.auth.service.account.json.keyfile", "<KeyFile Path>"), and likewise all the other parameters that we would normally set in core-site.xml.
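
Put together, the configuration looks roughly like the sketch below. The property names are the standard gcs-connector settings that would otherwise live in core-site.xml; the project ID and keyfile path are placeholders, not values from the original setup.

// Configure the GCS connector on the running SparkSession.
val conf = spark.sparkContext.hadoopConfiguration

// Register the filesystem implementations for the gs:// scheme.
conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
conf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")

// Project and service-account credentials ("<Project ID>" and
// "<KeyFile Path>" are placeholders).
conf.set("fs.gs.project.id", "<Project ID>")
conf.set("google.cloud.auth.service.account.enable", "true")
conf.set("google.cloud.auth.service.account.json.keyfile", "<KeyFile Path>")
// If that key is not picked up, fs.gs.auth.service.account.json.keyfile is
// the alternative property name mentioned in the answer below.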

After setting all of these, the program was able to access the files in Google Cloud Storage.


You need to set google.cloud.auth.service.account.json.keyfile to the local path of the JSON credentials file for a service account created by following these instructions for generating a private key. The stack trace shows that the connector thinks it is running on a GCE VM and is trying to obtain credentials from the local metadata server. If that does not work, try setting fs.gs.auth.service.account.json.keyfile instead.

When trying to SSH, have you tried gcloud compute ssh <instance name>? You may also want to check the Compute Engine firewall rules to make sure inbound connections on port 22 are allowed.


I downloaded the JSON credentials file for the service account key and set it in the GOOGLE_APPLICATION_CREDENTIALS environment variable, since I am using Windows, and tried to run the program, but I get the same timeout error. I hope I have correctly implemented your suggestion about setting google.cloud.auth.service.account.json.keyfile to the local path of the JSON file; if not, please correct me. I am not sure where to set fs.gs.auth.service.account.json.keyfile. If there is any documentation on what configuration is needed on Windows, please point me to it. – Shawn


When trying to SSH, I tried gcloud compute ssh <instance name> as you mentioned, but it also gives me a timeout error. – Shawn


To my surprise, I am able to create buckets in Google Cloud Storage using the Storage class. I am not sure what is wrong with reading files from a bucket. – Shawn