Hello,
I would like to set the default "spark.driver.maxResultSize" from a notebook on my cluster. I know I can do that in the cluster settings, but is there a way to set it by code?
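Ideally I could just run something like the following at the top of the notebook (the "8g" value is only an example, and I'm not sure whether this particular property can still be changed once the driver is already running):

# "spark" here is the session object Databricks already provides in the notebook
spark.conf.set("spark.driver.maxResultSize", "8g")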
I also know how to do it when I start a Spark session myself, but in my case I load the data directly from the Feature Store and want to convert my PySpark DataFrame to pandas.
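For context, this is what I mean by setting it when starting a session myself (again, "8g" is just a placeholder value):

from pyspark.sql import SparkSession

# create (or reuse) a session with the config applied at startup
spark = (
    SparkSession.builder
    .config("spark.driver.maxResultSize", "8g")
    .getOrCreate()
)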
from databricks import feature_store
import pandas as pd
from pyspark.sql import functions as f
from os.path import join

fs = feature_store.FeatureStoreClient()
prediction_data = fs.read_table(name=name)
# toPandas() collects the full result to the driver, which is where the maxResultSize limit comes in
prediction_data_pd = prediction_data.toPandas()