The following function definition executes fine:
def autoload_to_table(data_source, source_format, table_name, checkpoint_directory):
    # Incrementally ingest files with Auto Loader and stream them into a table
    query = (spark.readStream
                  .format("cloudFiles")
                  .option("cloudFiles.format", source_format)
                  .option("cloudFiles.schemaLocation", checkpoint_directory)
                  .load(data_source)
                  .writeStream
                  .option("checkpointLocation", checkpoint_directory)
                  .option("mergeSchema", "true")
                  .table(table_name))
    return query
However, an error occurs when the function is called. Please let me know where the problem is.
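For reference, the failing invocation (copied from the traceback below; the DA.paths values come from the course setup helper) is:

query = autoload_to_table(data_source=f"{DA.paths.working_dir}/tracker",
                          source_format="json",
                          table_name="target_table",
                          checkpoint_directory=f"{DA.paths.checkpoints}/target_table")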
java.lang.UnsupportedOperationException: Method com.databricks.backend.daemon.data.client.DBFSV1.resolvePathOnPhysicalStorage(path: Path)
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<command-1919808831305181> in <module>
----> 1 query = autoload_to_table(data_source = f"{DA.paths.working_dir}/tracker",
      2                           source_format = "json",
      3                           table_name = "target_table",
      4                           checkpoint_directory = f"{DA.paths.checkpoints}/target_table")

<command-1919808831305179> in autoload_to_table(data_source, source_format, table_name, checkpoint_directory)
      1 def autoload_to_table(data_source, source_format, table_name, checkpoint_directory):
----> 2     query = (spark.readStream
      3                   .format("cloudFiles")
      4                   .option("cloudFiles.format", source_format)
      5                   .option("cloudFiles.schemaLocation", checkpoint_directory)

/databricks/spark/python/pyspark/sql/streaming.py in load(self, path, format, schema, **options)
    450                 raise ValueError("If the path is provided for stream, it needs to be a " +
    451                                  "non-empty string. List of paths are not supported.")
--> 452             return self._df(self._jreader.load(path))
    453         else:
    454             return self._df(self._jreader.load())