
I am running into a problem while trying to connect to Bigtable from Spark: the connection code throws a Jetty ALPN/NPN exception when my Spark/Scala code connects to Bigtable.

INFO grpc.BigtableSession: Bigtable options: BigtableOptions{dataHost=bigtable.googleapis.com, tableAdminHost=bigtableadmin.googleapis.com, instanceAdminHost=bigtableadmin.googleapis.com, projectId=ProjectID, instanceId=InstanceForBigTable, userAgent=hbase-1.2.5, credentialType=DefaultCredentials, port=443, dataChannelCount=20, retryOptions=RetryOptions{retriesEnabled=true, allowRetriesWithoutTimestamp=false, statusToRetryOn=[UNAVAILABLE, DEADLINE_EXCEEDED, UNAUTHENTICATED, INTERNAL, ABORTED], initialBackoffMillis=5, maxElapsedBackoffMillis=60000, backoffMultiplier=2.0, streamingBufferSize=60, readPartialRowTimeoutMillis=60000, maxScanTimeoutRetries=3}, bulkOptions=BulkOptions{asyncMutatorCount=2, useBulkApi=true, bulkMaxKeyCount=25, bulkMaxRequestSize=1048576, autoflushMs=0, maxInflightRpcs=1000, maxMemory=715862835, enableBulkMutationThrottling=false, bulkMutationRpcTargetMs=100}, callOptionsConfig=CallOptionsConfig{useTimeout=false, shortRpcTimeoutMs=60000, longRpcTimeoutMs=600000}, usePlaintextNegotiation=false}. 

Exception in thread "BigtableSession-startup-1" java.lang.IllegalArgumentException: Jetty ALPN/NPN has not been properly configured. 
    at com.google.bigtable.repackaged.io.grpc.netty.GrpcSslContexts.selectApplicationProtocolConfig(GrpcSslContexts.java:159) 
    at com.google.bigtable.repackaged.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:136) 
    at com.google.bigtable.repackaged.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:124) 
    at com.google.bigtable.repackaged.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:94) 
    at com.google.bigtable.repackaged.com.google.cloud.bigtable.grpc.BigtableSession.createSslContext(BigtableSession.java:132) 
    at com.google.bigtable.repackaged.com.google.cloud.bigtable.grpc.BigtableSession.access$000(BigtableSession.java:84) 
    at com.google.bigtable.repackaged.com.google.cloud.bigtable.grpc.BigtableSession$2.run(BigtableSession.java:159) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:748) 

The code that hits the error, where I try to create the connection:

val tableDF: DataFrame = sqlContext.read.options(Map(HBaseTableCatalog.tableCatalog -> hbaseReadSchema)) 
     .format("org.apache.spark.sql.execution.datasources.hbase").load() 

Can anyone explain the cause of the above error?


Please remove the PII from the first line. Also, we have a working SparkSQL example via hortonworks-shc here: https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/scala/bigtable-shc –

Answer


This is most likely a dependency issue.

Using bigtable-hbase-1.x-hadoop:1.0.0-pre3 should fix most of the likely dependency pitfalls. It shades all of the Bigtable dependencies and is designed to work well in Hadoop environments.
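
With sbt, for example, the shaded artifact can be pulled in roughly like this (a sketch against an assumed build, adjust the coordinates to your own setup):

    // build.sbt sketch: the shaded bigtable-hbase artifact bundles its own
    // gRPC/netty, so no separate Jetty ALPN boot-classpath jar should be needed.
    libraryDependencies +=
      "com.google.cloud.bigtable" % "bigtable-hbase-1.x-hadoop" % "1.0.0-pre3"

If you run the job with spark-submit instead of building a fat jar, the same coordinate can be passed via --packages.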

As Solomon noted in his comment, please model your dependencies after the example on GitHub.