Audit log access

Preview

This feature is in Public Preview.

Note

This feature requires the Databricks Premium plan.

Warning

Audit logging for Databricks SQL is temporarily disabled.

Databricks provides access to audit logs of activities performed by Databricks users, allowing your enterprise to monitor detailed Databricks usage patterns.

There are two types of logs:

- Workspace-level audit logs with workspace-level events.
- Account-level audit logs with account-level events.

For a list of each type of event and the associated services, see Audit events.

As a Databricks account owner or account admin, you can configure delivery of audit logs in JSON file format to a Google Cloud Storage (GCS) bucket, where you can make the data available for usage analysis. Databricks delivers a separate JSON file for each workspace in your account and a separate file for account-level events.

To configure audit log delivery, you must set up a GCS bucket, give Databricks access to that bucket, and then use the account console to define a log delivery configuration that tells Databricks where to send the logs.

You cannot edit a log delivery configuration after it is created, but you can temporarily or permanently disable it using the account console. You can have at most two audit log delivery configurations enabled at any time.

To configure log delivery, see Configure audit log delivery.
Configure verbose audit logs

In addition to the default events, you can configure a workspace to generate additional events by enabling verbose audit logs.

Additional notebook actions

Additional actions in audit log category notebook:

- Action name runCommand, emitted after Databricks runs a command in a notebook. A command corresponds to a cell in a notebook. Request parameters:
  - notebookId: The ID of the notebook.
  - executionTime: The duration of the command in seconds. This is a decimal value, such as 13.789.
  - status: The status of the command. Possible values are finished (the command finished), skipped (the command was skipped), cancelled (the command was cancelled), or failed (the command failed).
  - commandId: The unique ID for this command.
  - commandText: The text of the command. For multi-line commands, lines are separated by newline characters.
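As a sketch of how these verbose notebook events can be consumed, the following plain-Python snippet filters runCommand records for failed commands. No Spark is used here, and the `events` list and its values are illustrative placeholders, not real log output:

```python
def failed_commands(events):
    """Return (notebookId, commandId) pairs for runCommand events
    whose status indicates the command failed."""
    return [
        (e["requestParams"]["notebookId"], e["requestParams"]["commandId"])
        for e in events
        if e.get("actionName") == "runCommand"
        and e["requestParams"].get("status") == "failed"
    ]

# Hypothetical parsed audit records, shaped like the fields listed above.
events = [
    {"actionName": "runCommand",
     "requestParams": {"notebookId": "nb1", "commandId": "c1",
                       "status": "failed", "executionTime": 1.5}},
    {"actionName": "runCommand",
     "requestParams": {"notebookId": "nb1", "commandId": "c2",
                       "status": "finished", "executionTime": 13.789}},
]
print(failed_commands(events))
# [('nb1', 'c1')]
```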
Additional Databricks SQL actions

Additional actions in audit log category databrickssql:

- Action name commandSubmit, which runs when a command is submitted to Databricks SQL. Request parameters:
  - commandText: The user-specified SQL statement or command.
  - warehouseId: The ID of the SQL warehouse.
  - commandId: The ID of the command.
- Action name commandFinish, which runs when a command completes or is cancelled. Request parameters:
  - warehouseId: The ID of the SQL warehouse.
  - commandId: The ID of the command.

  Check the response field for additional information related to the command result:
  - statusCode: The HTTP response code. This is 400 if it is a general error.
  - errorMessage: The error message.

    Note: In some cases, for certain long-running commands, the errorMessage field may not be populated on failure.

  - result: This field is empty.
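The response fields described above can be read as in this minimal sketch; the record below is hypothetical and its values are illustrative only:

```python
# A hypothetical commandFinish record with the response fields listed above.
event = {
    "actionName": "commandFinish",
    "requestParams": {"warehouseId": "w1", "commandId": "c1"},
    "response": {"statusCode": 400,
                 "errorMessage": "Query failed", "result": ""},
}

resp = event["response"]
if resp["statusCode"] != 200:
    # errorMessage may be absent for some long-running commands,
    # so read it defensively.
    print(resp.get("errorMessage", "<no error message recorded>"))
# Query failed
```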
Enable or disable verbose audit logs

1. As an admin, go to the Databricks admin console.
2. Click Workspace settings.
3. Next to Verbose Audit Logs, enable or disable the feature.

When verbose logging is enabled or disabled, an auditable event is emitted in category workspace with action workspaceConfKeys. The workspaceConfKeys request parameter is enableVerboseAuditLogs. The request parameter workspaceConfValues is true (feature enabled) or false (feature disabled).
Latency

Audit log delivery begins within one hour of completing the log delivery configuration, at which point you can access the JSON files.

After audit log delivery begins, auditable events are typically logged within one hour. New JSON files may overwrite existing files for each workspace. Overwriting ensures exactly-once semantics without requiring read or delete access to your account.

Enabling or disabling a log delivery configuration can take up to an hour to take effect.
Location

The delivery location is:

gs://<bucket-name>/<delivery-path-prefix>/workspaceId=<workspaceId>/date=<yyyy-mm-dd>/auditlogs_<internal-id>.json

If the optional delivery path prefix is omitted, the path does not include <delivery-path-prefix>/.

Account-level audit events that are not associated with any single workspace are delivered to the workspaceId=0 partition.

For more information about accessing these files and analyzing them using Databricks, see Analyze audit logs.
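The delivery path template above can be sketched as a small helper. The `internal_id` value is assigned by Databricks, and all argument values used here are placeholders:

```python
from datetime import date

def audit_log_path(bucket, workspace_id, day, internal_id, prefix=""):
    """Build a GCS object path following the delivery location template
    above. internal_id is assigned by Databricks; values are placeholders."""
    parts = [f"gs://{bucket}"]
    if prefix:  # the delivery path prefix is optional
        parts.append(prefix)
    parts.append(f"workspaceId={workspace_id}")
    parts.append(f"date={day:%Y-%m-%d}")
    parts.append(f"auditlogs_{internal_id}.json")
    return "/".join(parts)

# Account-level events land in the workspaceId=0 partition.
print(audit_log_path("my-bucket", 0, date(2023, 1, 15), "abc123"))
# gs://my-bucket/workspaceId=0/date=2023-01-15/auditlogs_abc123.json
```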
Schema

Databricks delivers audit logs in JSON format. The schema of audit log records is as follows:

- version: Schema version of the audit log format.
- timestamp: UTC timestamp of the action.
- workspaceId: ID of the workspace this event relates to. May be set to "0" for account-level events that do not apply to any workspace.
- sourceIPAddress: IP address of the source request.
- userAgent: Browser or API client used to make the request.
- sessionId: Session ID of the action.
- userIdentity: Information about the user that made the request.
  - email: User email address.
- serviceName: Service that logged the request.
- actionName: The action, such as login, logout, read, write, and so on.
- requestId: Unique request ID.
- requestParams: Parameter key-value pairs used in the audited event.
- response: Response to the request.
  - errorMessage: Error message if there was an error.
  - result: Result of the request.
  - statusCode: HTTP status code that indicates whether the request succeeded.
- auditLevel: Specifies whether this is a workspace-level event (WORKSPACE_LEVEL) or an account-level event (ACCOUNT_LEVEL).
- accountId: Account ID of this Databricks account.
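A hypothetical record using this schema, parsed with plain Python. All field values below are illustrative, not real log output:

```python
import json

# A made-up audit log record with the schema fields listed above.
record_json = """{
  "version": "2.0",
  "timestamp": 1629775584891,
  "workspaceId": 1234567890,
  "sourceIPAddress": "203.0.113.10",
  "userAgent": "Mozilla/5.0",
  "sessionId": "abc-123",
  "userIdentity": {"email": "user@example.com"},
  "serviceName": "accounts",
  "actionName": "login",
  "requestId": "req-1",
  "requestParams": {"user": "user@example.com"},
  "response": {"statusCode": 200},
  "auditLevel": "WORKSPACE_LEVEL",
  "accountId": "acc-1"
}"""

record = json.loads(record_json)
# Account-level events carry auditLevel ACCOUNT_LEVEL (and workspaceId 0).
is_account_level = record["auditLevel"] == "ACCOUNT_LEVEL"
print(record["serviceName"], record["actionName"], is_account_level)
# accounts login False
```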
Audit events

The serviceName and actionName properties identify an audit event in an audit log record.

Workspace-level audit logs are available for the following services:

- accounts
- clusters
- clusterPolicies
- dbfs
- genie
- globalInitScripts
- groups
- iamRole
- instancePools
- jobs
- mlflowExperiment
- notebook
- repos
- secrets
- sqlAnalytics
- sqlPermissions, which has all the audit logs for table access when table access control lists are enabled.
- ssh
- workspace

Account-level audit logs are available for the following services:

- accountBillableUsage: Access to billable usage for the account.
- logDelivery: Log delivery configurations.
- accountsManager: Actions performed in the account console.

Account-level events have the workspaceId field set to a valid workspace ID if they reference workspace-related events such as creating or deleting a workspace. If they are not associated with any workspace, the workspaceId field is set to 0.

Note

If actions take a long time, the request and response are logged separately, but the request and response pair have the same requestId.

Except for mount-related operations, Databricks audit logs do not include DBFS-related operations.

Automated actions, such as resizing a cluster due to autoscaling or launching a job due to scheduling, are performed by the user System-User.
Request parameters

The request parameters in the field requestParams are listed below for each supported service and action, grouped by workspace-level events and account-level events.

The requestParams field is subject to truncation. If the size of its JSON representation exceeds 100 KB, values are truncated and the string ...truncated is appended to truncated entries. In rare cases where a truncated map is still larger than 100 KB, a single TRUNCATED key with an empty value is present instead.
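The truncation rules above can be sketched as follows; the helper name and the sample map are hypothetical:

```python
TRUNCATED_SUFFIX = "...truncated"

def truncated_keys(request_params):
    """Return the keys in a requestParams map whose values were truncated.

    Per the rules above: individual string values get the "...truncated"
    suffix; if the whole map was still too large, a single TRUNCATED key
    with an empty value replaces it.
    """
    if set(request_params) == {"TRUNCATED"}:
        return ["TRUNCATED"]
    return [k for k, v in request_params.items()
            if isinstance(v, str) and v.endswith(TRUNCATED_SUFFIX)]

# Illustrative requestParams map, not real log output.
params = {"commandText": "SELECT * FROM t ...truncated", "commandId": "c1"}
print(truncated_keys(params))
# ['commandText']
```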
Workspace-level audit log events

| Service | Action | Request Parameters |
|---|---|---|
| accounts | add | ["targetUserName", "endpoint", "targetUserId"] |
| | addPrincipalToGroup | ["targetGroupId", "endpoint", "targetUserId", "targetGroupName", "targetUserName"] |
| | changePassword | ["newPasswordSource", "targetUserId", "serviceSource", "wasPasswordChanged", "userId"] |
| | createGroup | ["endpoint", "targetGroupId", "targetGroupName"] |
| | delete | ["targetUserId", "targetUserName", "endpoint"] |
| | garbageCollectDbToken | ["tokenExpirationTime", "tokenId"] |
| | generateDbToken | ["tokenId", "tokenExpirationTime"] |
| | jwtLogin | ["user"] |
| | login | ["user"] |
| | logout | ["user"] |
| | removeAdmin | ["targetUserName", "endpoint", "targetUserId"] |
| | removeGroup | ["targetGroupId", "targetGroupName", "endpoint"] |
| | resetPassword | ["serviceSource", "userId", "endpoint", "targetUserId", "targetUserName", "wasPasswordChanged", "newPasswordSource"] |
| | revokeDbToken | ["tokenId"] |
| | samlLogin | ["user"] |
| | setAdmin | ["endpoint", "targetUserName", "targetUserId"] |
| | tokenLogin | ["tokenId", "user"] |
| | validateEmail | ["endpoint", "targetUserName", "targetUserId"] |
| clusters | changeClusterAcl | ["shardName", "aclPermissionSet", "targetUserId", "resourceId"] |
| | create | ["cluster_log_conf", "num_workers", "enable_elastic_disk", "driver_node_type_id", "start_cluster", "docker_image", "ssh_public_keys", "aws_attributes", "acl_path_prefix", "node_type_id", "instance_pool_id", "spark_env_vars", "init_scripts", "spark_version", "cluster_source", "autotermination_minutes", "cluster_name", "autoscale", "custom_tags", "cluster_creator", "enable_local_disk_encryption", "idempotency_token", "spark_conf", "organization_id", "no_driver_daemon", "user_id"] |
| | createResult | ["clusterName", "clusterState", "clusterId", "clusterWorkers", "clusterOwnerUserId"] |
| | delete | ["cluster_id"] |
| | deleteResult | ["clusterWorkers", "clusterState", "clusterId", "clusterOwnerUserId", "clusterName"] |
| | edit | ["spark_env_vars", "no_driver_daemon", "enable_elastic_disk", "aws_attributes", "driver_node_type_id", "custom_tags", "cluster_name", "spark_conf", "ssh_public_keys", "autotermination_minutes", "cluster_source", "docker_image", "enable_local_disk_encryption", "cluster_id", "spark_version", "autoscale", "cluster_log_conf", "instance_pool_id", "num_workers", "init_scripts", "node_type_id"] |
| | permanentDelete | ["cluster_id"] |
| | resize | ["cluster_id", "num_workers", "autoscale"] |
| | resizeResult | ["clusterWorkers", "clusterState", "clusterId", "clusterOwnerUserId", "clusterName"] |
| | restart | ["cluster_id"] |
| | restartResult | ["clusterId", "clusterState", "clusterName", "clusterOwnerUserId", "clusterWorkers"] |
| | start | ["init_scripts_safe_mode", "cluster_id"] |
| | startResult | ["clusterName", "clusterState", "clusterWorkers", "clusterOwnerUserId", "clusterId"] |
| clusterPolicies | create | ["name"] |
| | edit | ["policy_id", "name"] |
| | delete | ["policy_id"] |
| | changeClusterPolicyAcl | ["shardName", "targetUserId", "resourceId", "aclPermissionSet"] |
| dbfs | addBlock | ["handle", "data_length"] |
| | create | ["path", "bufferSize", "overwrite"] |
| | delete | ["recursive", "path"] |
| | getSessionCredentials | ["mountPoint"] |
| | mkdir | ["path"] |
| | mount | ["mountPoint", "owner"] |
| | move | ["dst", "source_path", "src", "destination_path"] |
| | put | ["path", "overwrite"] |
| | unmount | ["mountPoint"] |
| databrickssql | addDashboardWidget | ["dashboardId", "widgetId"] |
| | cancelQueryExecution | ["queryExecutionId"] |
| | changeWarehouseAcls | ["aclPermissionSet", "resourceId", "shardName", "targetUserId"] |
| | changePermissions | ["granteeAndPermission", "objectId", "objectType"] |
| | cloneDashboard | ["dashboardId"] |
| | commandSubmit (verbose audit logs only) | ["orgId", "sourceIpAddress", "timestamp", "userAgent", "userIdentity", "shardName" (see details)] |
| | commandFinish (verbose audit logs only) | ["orgId", "sourceIpAddress", "timestamp", "userAgent", "userIdentity", "shardName" (see details)] |
| | createAlertDestination | ["alertDestinationId", "alertDestinationType"] |
| | createDashboard | ["dashboardId"] |
| | createDataPreviewDashboard | ["dashboardId"] |
| | createWarehouse | ["auto_resume", "auto_stop_mins", "channel", "cluster_size", "conf_pairs", "custom_cluster_confs", "enable_databricks_compute", "enable_photon", "enable_serverless_compute", "instance_profile_arn", "max_num_clusters", "min_num_clusters", "name", "size", "spot_instance_policy", "tags", "test_overrides"] |
| | createQuery | ["queryId"] |
| | createQueryDraft | ["queryId"] |
| | createQuerySnippet | ["querySnippetId"] |
| | createRefreshSchedule | ["alertId", "dashboardId", "refreshScheduleId"] |
| | createSampleDashboard | ["sampleDashboardId"] |
| | createSubscription | ["dashboardId", "refreshScheduleId", "subscriptionId"] |
| | createVisualization | ["queryId", "visualizationId"] |
| | deleteAlert | ["alertId"] |
| | deleteAlertDestination | ["alertDestinationId"] |
| | deleteDashboard | ["dashboardId"] |
| | deleteDashboardWidget | ["widgetId"] |
| | deleteWarehouse | ["id"] |
| | deleteExternalDatasource | ["dataSourceId"] |
| | deleteQuery | ["queryId"] |
| | deleteQueryDraft | ["queryId"] |
| | deleteQuerySnippet | ["querySnippetId"] |
| | deleteRefreshSchedule | ["alertId", "dashboardId", "refreshScheduleId"] |
| | deleteSubscription | ["subscriptionId"] |
| | deleteVisualization | ["visualizationId"] |
| | downloadQueryResult | ["fileType", "queryId", "queryResultId"] |
| | editWarehouse | ["auto_stop_mins", "channel", "cluster_size", "confs", "enable_photon", "enable_serverless_compute", "id", "instance_profile_arn", "max_num_clusters", "min_num_clusters", "name", "spot_instance_policy", "tags"] |
| | executeAdhocQuery | ["dataSourceId"] |
| | executeSavedQuery | ["queryId"] |
| | executeWidgetQuery | ["widgetId"] |
| | favoriteDashboard | ["dashboardId"] |
| | favoriteQuery | ["queryId"] |
| | forkQuery | ["originalQueryId", "queryId"] |
| | listQueries | ["filter_by", "include_metrics", "max_results", "page_token"] |
| | moveDashboardToTrash | ["dashboardId"] |
| | moveQueryToTrash | ["queryId"] |
| | muteAlert | ["alertId"] |
| | publishBatch | ["statuses"] |
| | publishDashboardSnapshot | ["dashboardId", "hookId", "subscriptionId"] |
| | restoreDashboard | ["dashboardId"] |
| | restoreQuery | ["queryId"] |
| | setWarehouseConfig | ["data_access_config", "enable_serverless_compute", "instance_profile_arn", "security_policy", "serverless_agreement", "sql_configuration_parameters", "try_create_databricks_managed_starter_warehouse"] |
| | snapshotDashboard | ["dashboardId"] |
| | startWarehouse | ["id"] |
| | stopWarehouse | ["id"] |
| | subscribeAlert | ["alertId", "destinationId"] |
| | transferObjectOwnership | ["newOwner", "objectId", "objectType"] |
| | unfavoriteDashboard | ["dashboardId"] |
| | unfavoriteQuery | ["queryId"] |
| | unmuteAlert | ["alertId"] |
| | unsubscribeAlert | ["alertId", "subscriberId"] |
| | updateAlert | ["alertId", "queryId"] |
| | updateAlertDestination | ["alertDestinationId"] |
| | updateDashboard | ["dashboardId"] |
| | updateDashboardWidget | ["widgetId"] |
| | updateOrganizationSetting | ["has_configured_data_access", "has_explored_sql_warehouses", "has_granted_permissions"] |
| | updateQuery | ["queryId"] |
| | updateQueryDraft | ["queryId"] |
| | updateQuerySnippet | ["querySnippetId"] |
| | updateRefreshSchedule | ["alertId", "dashboardId", "refreshScheduleId"] |
| | updateVisualization | ["visualizationId"] |
| genie | databricksAccess | ["duration", "approver", "reason", "authType", "user"] |
| globalInitScripts | create | ["name", "position", "script-SHA256", "enabled"] |
| | update | ["script_id", "name", "position", "script-SHA256", "enabled"] |
| | delete | ["script_id"] |
| groups | addPrincipalToGroup | ["user_name", "parent_name"] |
| | createGroup | ["group_name"] |
| | getGroupMembers | ["group_name"] |
| | removeGroup | ["group_name"] |
| iamRole | changeIamRoleAcl | ["targetUserId", "shardName", "resourceId", "aclPermissionSet"] |
| instancePools | changeInstancePoolAcl | ["shardName", "resourceId", "targetUserId", "aclPermissionSet"] |
| | create | ["enable_elastic_disk", "preloaded_spark_versions", "idle_instance_autotermination_minutes", "instance_pool_name", "node_type_id", "custom_tags", "max_capacity", "min_idle_instances", "aws_attributes"] |
| | delete | ["instance_pool_id"] |
| | edit | ["instance_pool_name", "idle_instance_autotermination_minutes", "min_idle_instances", "preloaded_spark_versions", "max_capacity", "enable_elastic_disk", "node_type_id", "instance_pool_id", "aws_attributes"] |
| jobs | cancel | ["run_id"] |
| | cancelAllRuns | ["job_id"] |
| | changeJobAcl | ["shardName", "aclPermissionSet", "resourceId", "targetUserId"] |
| | create | ["spark_jar_task", "email_notifications", "notebook_task", "spark_submit_task", "timeout_seconds", "libraries", "name", "spark_python_task", "job_type", "new_cluster", "existing_cluster_id", "max_retries", "schedule"] |
| | delete | ["job_id"] |
| | deleteRun | ["run_id"] |
| | reset | ["job_id", "new_settings"] |
| | resetJobAcl | ["grants", "job_id"] |
| | runFailed | ["jobClusterType", "jobTriggerType", "jobId", "jobTaskType", "runId", "jobTerminalState", "idInJob", "orgId"] |
| | runNow | ["notebook_params", "job_id", "jar_params", "workflow_context"] |
| | runSucceeded | ["idInJob", "jobId", "jobTriggerType", "orgId", "runId", "jobClusterType", "jobTaskType", "jobTerminalState"] |
| | submitRun | ["shell_command_task", "run_name", "spark_python_task", "existing_cluster_id", "notebook_task", "timeout_seconds", "libraries", "new_cluster", "spark_jar_task"] |
| | update | ["fields_to_remove", "job_id", "new_settings"] |
| mlflowExperiment | deleteMlflowExperiment | ["experimentId", "path", "experimentName"] |
| | moveMlflowExperiment | ["newPath", "experimentId", "oldPath"] |
| | restoreMlflowExperiment | ["experimentId", "path", "experimentName"] |
| mlflowModelRegistry | listModelArtifacts | ["name", "version", "path", "page_token"] |
| | getModelVersionSignedDownloadUri | ["name", "version", "path"] |
| | createRegisteredModel | ["name", "tags"] |
| | deleteRegisteredModel | ["name"] |
| | renameRegisteredModel | ["name", "new_name"] |
| | setRegisteredModelTag | ["name", "key", "value"] |
| | deleteRegisteredModelTag | ["name", "key"] |
| | createModelVersion | ["name", "source", "run_id", "tags", "run_link"] |
| | deleteModelVersion | ["name", "version"] |
| | getModelVersionDownloadUri | ["name", "version"] |
| | setModelVersionTag | ["name", "version", "key", "value"] |
| | deleteModelVersionTag | ["name", "version", "key"] |
| | createTransitionRequest | ["name", "version", "stage"] |
| | deleteTransitionRequest | ["name", "version", "stage", "creator"] |
| | approveTransitionRequest | ["name", "version", "stage", "archive_existing_versions"] |
| | rejectTransitionRequest | ["name", "version", "stage"] |
| | transitionModelVersionStage | ["name", "version", "stage", "archive_existing_versions"] |
| | transitionModelVersionStageDatabricks | ["name", "version", "stage", "archive_existing_versions"] |
| | createComment | ["name", "version"] |
| | updateComment | ["id"] |
| | deleteComment | ["id"] |
| notebook | attachNotebook | ["path", "clusterId", "notebookId"] |
| | createNotebook | ["notebookId", "path"] |
| | deleteFolder | ["path"] |
| | deleteNotebook | ["notebookId", "notebookName", "path"] |
| | detachNotebook | ["notebookId", "clusterId", "path"] |
| | downloadLargeResults | ["notebookId", "notebookFullPath"] |
| | downloadPreviewResults | ["notebookId", "notebookFullPath"] |
| | importNotebook | ["path"] |
| | moveNotebook | ["newPath", "oldPath", "notebookId"] |
| | renameNotebook | ["newName", "oldName", "parentPath", "notebookId"] |
| | restoreFolder | ["path"] |
| | restoreNotebook | ["path", "notebookId", "notebookName"] |
| | runCommand (verbose audit logs only) | ["notebookId", "executionTime", "status", "commandId", "commandText" (see details)] |
| | takeNotebookSnapshot | ["path"] |
| repos | createRepo | ["url", "provider", "path"] |
| | updateRepo | ["id", "branch", "tag", "git_url", "git_provider"] |
| | getRepo | ["id"] |
| | listRepos | ["path_prefix", "next_page_token"] |
| | deleteRepo | ["id"] |
| | pull | ["id"] |
| | commitAndPush | ["id", "message", "files", "checkSensitiveToken"] |
| | checkoutBranch | ["id", "branch"] |
| | discard | ["id", "file_paths"] |
| secrets | createScope | ["scope"] |
| | deleteScope | ["scope"] |
| | deleteSecret | ["key", "scope"] |
| | getSecret | ["scope", "key"] |
| | listAcls | ["scope"] |
| | listSecrets | ["scope"] |
| | putSecret | ["string_value", "scope", "key"] |
| sqlanalytics | createEndpoint | |
| | startEndpoint | |
| | stopEndpoint | |
| | deleteEndpoint | |
| | editEndpoint | |
| | changeEndpointAcls | |
| | setEndpointConfig | |
| | createQuery | ["queryId"] |
| | updateQuery | ["queryId"] |
| | forkQuery | ["queryId", "originalQueryId"] |
| | moveQueryToTrash | ["queryId"] |
| | deleteQuery | ["queryId"] |
| | restoreQuery | ["queryId"] |
| | createDashboard | ["dashboardId"] |
| | updateDashboard | ["dashboardId"] |
| | moveDashboardToTrash | ["dashboardId"] |
| | deleteDashboard | ["dashboardId"] |
| | restoreDashboard | ["dashboardId"] |
| | createAlert | ["alertId", "queryId"] |
| | updateAlert | ["alertId", "queryId"] |
| | deleteAlert | ["alertId"] |
| | createVisualization | ["visualizationId", "queryId"] |
| | updateVisualization | ["visualizationId"] |
| | deleteVisualization | ["visualizationId"] |
| | changePermissions | ["objectType", "objectId", "granteeAndPermission"] |
| | createAlertDestination | ["alertDestinationId", "alertDestinationType"] |
| | updateAlertDestination | ["alertDestinationId"] |
| | deleteAlertDestination | ["alertDestinationId"] |
| | createQuerySnippet | ["querySnippetId"] |
| | updateQuerySnippet | ["querySnippetId"] |
| | deleteQuerySnippet | ["querySnippetId"] |
| | downloadQueryResult | ["queryId", "queryResultId", "fileType"] |
| sqlPermissions | createSecurable | ["securable"] |
| | grantPermission | ["permission"] |
| | removeAllPermissions | ["securable"] |
| | requestPermissions | ["requests"] |
| | revokePermission | ["permission"] |
| | showPermissions | ["securable", "principal"] |
| ssh | login | ["containerId", "userName", "port", "publicKey", "instanceId"] |
| | logout | ["userName", "containerId", "instanceId"] |
| workspace | changeWorkspaceAcl | ["shardName", "targetUserId", "aclPermissionSet", "resourceId"] |
| | fileCreate | ["path"] |
| | fileDelete | ["path"] |
| | moveWorkspaceNode | ["destinationPath", "path"] |
| | purgeWorkspaceNodes | ["treestoreId"] |
| | workspaceConfEdit | ["workspaceConfKeys (values: enableResultsDownloading, enableExportNotebook)", "workspaceConfValues"] |
| | workspaceExport | ["workspaceExportFormat", "notebookFullPath"] |
Account-level audit log events

| Service | Action | Request Parameters |
|---|---|---|
| accountBillableUsage | getAggregatedUsage | ["account_id", "window_size", "start_time", "end_time", "meter_name", "workspace_ids_filter"] |
| | getDetailedUsage | ["account_id", "start_month", "end_month", "with_pii"] |
| accounts | login | ["user"] |
| | gcpWorkspaceBrowserLogin | ["user"] |
| | logout | ["user"] |
| accountsManager | updateAccount | ["account_id", "account"] |
| | changeAccountOwner | ["account_id", "first_name", "last_name", "email"] |
| | updateSubscription | ["account_id", "subscription_id", "subscription"] |
| | listSubscriptions | ["account_id"] |
| | createWorkspaceConfiguration | ["workspace"] |
| | getWorkspaceConfiguration | ["account_id", "workspace_id"] |
| | listWorkspaceConfigurations | ["account_id"] |
| | updateWorkspaceConfiguration | ["account_id", "workspace_id"] |
| | deleteWorkspaceConfiguration | ["account_id", "workspace_id"] |
| | listWorkspaceEncryptionKeyRecords | ["account_id", "workspace_id"] |
| | listWorkspaceEncryptionKeyRecordsForAccount | ["account_id"] |
| | createVpcEndpoint | ["vpc_endpoint"] |
| | getVpcEndpoint | ["account_id", "vpc_endpoint_id"] |
| | listVpcEndpoints | ["account_id"] |
| | deleteVpcEndpoint | ["account_id", "vpc_endpoint_id"] |
| | createPrivateAccessSettings | ["private_access_settings"] |
| | getPrivateAccessSettings | ["account_id", "private_access_settings_id"] |
| | listPrivateAccessSettingss | ["account_id"] |
| | deletePrivateAccessSettings | ["account_id", "private_access_settings_id"] |
| logDelivery | createLogDeliveryConfiguration | ["account_id", "config_id"] |
| | updateLogDeliveryConfiguration | ["config_id", "account_id", "status"] |
| | getLogDeliveryConfiguration | ["log_delivery_configuration"] |
| | listLogDeliveryConfigurations | ["account_id", "storage_configuration_id", "credentials_id", "status"] |
| ssoConfigBackend | create | ["account_id", "sso_type", "config"] |
| | update | ["account_id", "sso_type", "config"] |
| | get | ["account_id", "sso_type"] |
Analyze audit logs

You can analyze audit logs using Databricks. The following examples use logs to report on Databricks access and Apache Spark versions.

Load the audit logs as a DataFrame and register the DataFrame as a temp table.

```scala
val df = spark.read.format("json").load("gs://bucketName/path/to/your/audit-logs")
df.createOrReplaceTempView("audit_logs")
```

List the users who accessed Databricks and from where.

```sql
%sql
SELECT DISTINCT userIdentity.email, sourceIPAddress
FROM audit_logs
WHERE serviceName = "accounts" AND actionName LIKE "%login%"
```

Check the Apache Spark versions used.

```sql
%sql
SELECT requestParams.spark_version, count(*)
FROM audit_logs
WHERE serviceName = "clusters" AND actionName = "create"
GROUP BY requestParams.spark_version
```

Check table data access.

```sql
%sql
SELECT *
FROM audit_logs
WHERE serviceName = "sqlPermissions" AND actionName = "requestPermissions"
```