Accessing Azure Data Lake Storage Gen1 from Databricks
Microsoft has announced the planned retirement of Azure Data Lake Storage Gen1 (formerly Azure Data Lake Store, also known as ADLS) and recommends all users migrate to Azure Data Lake Storage Gen2. Databricks recommends upgrading to Azure Data Lake Storage Gen2 for best performance and new features.
You can access Azure Data Lake Storage Gen1 directly using a service principal.
Create and grant permissions to service principal
If your selected access method requires a service principal with adequate permissions, and you do not have one, follow these steps:
Create an Azure AD application and service principal that can access resources. Note the following properties:
application-id: An ID that uniquely identifies the client application.
directory-id: An ID that uniquely identifies the Azure AD instance.
service-credential: A string that the application uses to prove its identity.
Register the service principal, granting the correct role assignment, such as Contributor, on the Azure Data Lake Storage Gen1 account.
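Before configuring Spark, you typically store the service credential as a Databricks secret. As a minimal check, assuming a secret scope named <scope-name> that already holds the credential under the key <key-name-for-service-credential>, you can confirm the secret is retrievable from a notebook:
# Confirm the secret scope and key are visible before configuring Spark.
# <scope-name> and <key-name-for-service-credential> are placeholders for your own names.
dbutils.secrets.listScopes()            # list the secret scopes available to the workspace
dbutils.secrets.list("<scope-name>")    # list the keys stored in the scope
service_credential = dbutils.secrets.get(scope="<scope-name>", key="<key-name-for-service-credential>")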
Access directly with Spark APIs using a service principal and OAuth 2.0
To read from your Azure Data Lake Storage Gen1 account, you can configure Spark to use service credentials with the following snippet in your notebook:
spark.conf.set("fs.adl.oauth2.access.token.provider.type","ClientCredential")spark.conf.set("fs.adl.oauth2.client.id","" )spark.conf.set("fs.adl.oauth2.credential",dbutils.secrets.get(scope="" ,key="" ))spark.conf.set(“fs.adl.oauth2.refresh.url”,"https://login.microsoftonline.com//oauth2/token" )
where dbutils.secrets.get(scope="<scope-name>", key="<key-name-for-service-credential>") retrieves your service credential that has been stored as a secret in a secret scope.
After you’ve set up your credentials, you can use standard Spark and Databricks APIs to access the resources. For example:
valdf=spark.read.format("parquet").load("adl://.azuredatalakestore.net/" )dbutils.fs.ls("adl://.azuredatalakestore.net/" )
Azure Data Lake Storage Gen1 provides directory level access control, so the service principal must have access to the directories that you want to read from as well as the Azure Data Lake Storage Gen1 resource.
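Writes work the same way, provided the service principal also has write access to the target directory. A minimal sketch in Python (the <output-directory-name> path is illustrative):
# Read from and write back to the account; requires the appropriate directory permissions.
# <storage-resource>, <directory-name>, and <output-directory-name> are placeholders.
df = spark.read.format("parquet").load("adl://<storage-resource>.azuredatalakestore.net/<directory-name>")
df.write.format("parquet").mode("overwrite").save("adl://<storage-resource>.azuredatalakestore.net/<output-directory-name>")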
Access through metastore
To access adl:// locations specified in the metastore, you must specify Hadoop credential configuration options as Spark options when you create the cluster by adding the spark.hadoop. prefix to the corresponding Hadoop configuration keys to propagate them to the Hadoop configurations used by the metastore:
spark.hadoop.fs.adl.oauth2.access.token.provider.type ClientCredential
spark.hadoop.fs.adl.oauth2.client.id <application-id>
spark.hadoop.fs.adl.oauth2.credential <service-credential>
spark.hadoop.fs.adl.oauth2.refresh.url https://login.microsoftonline.com/<directory-id>/oauth2/token
Warning
These credentials are available to all users who access the cluster.
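Once the cluster is configured this way, tables whose metastore locations point at the account can be created and queried directly. A minimal sketch, assuming a hypothetical table named events backed by existing Parquet data in the account:
# Hypothetical example: register a metastore table whose LOCATION is an adl:// path,
# then query it. Assumes the cluster was created with the spark.hadoop.* options above.
spark.sql("""
  CREATE TABLE IF NOT EXISTS events
  USING PARQUET
  LOCATION 'adl://<storage-resource>.azuredatalakestore.net/<directory-name>'
""")
display(spark.sql("SELECT * FROM events LIMIT 10"))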
Mount Azure Data Lake Storage Gen1 resource or folder
To mount an Azure Data Lake Storage Gen1 resource or a folder inside it, use the following command:
configs={"fs.adl.oauth2.access.token.provider.type":"ClientCredential","fs.adl.oauth2.client.id":"" ,"fs.adl.oauth2.credential":dbutils.secrets.get(scope="" ,key="" ),“fs.adl.oauth2.refresh.url”:"https://login.microsoftonline.com//oauth2/token" }# Optionally, you can add to the source URI of your mount point. dbutils.fs.mount(source="adl://.azuredatalakestore.net/" ,mount_point="/mnt/" ,extra_configs=configs)
val configs = Map(
  "fs.adl.oauth2.access.token.provider.type" -> "ClientCredential",
  "fs.adl.oauth2.client.id" -> "<application-id>",
  "fs.adl.oauth2.credential" -> dbutils.secrets.get(scope="<scope-name>", key="<key-name-for-service-credential>"),
  "fs.adl.oauth2.refresh.url" -> "https://login.microsoftonline.com/<directory-id>/oauth2/token")

// Optionally, you can add <directory-name> to the source URI of your mount point.
dbutils.fs.mount(
  source = "adl://<storage-resource>.azuredatalakestore.net/<directory-name>",
  mountPoint = "/mnt/<mount-name>",
  extraConfigs = configs)
where
/mnt/<mount-name> is a DBFS path that represents where the Azure Data Lake Storage Gen1 account or a folder inside it (specified in source) will be mounted in DBFS.
dbutils.secrets.get(scope="<scope-name>", key="<key-name-for-service-credential>") retrieves your service credential that has been stored as a secret in a secret scope.
Access files in the mounted resource as if they were local files, for example:
df=spark.read.format("text").load("/mnt//...." )df=spark.read.format("text").load("dbfs:/mnt//...." )
valdf=spark.read.format("text").load("/mnt//...." )valdf=spark.read.format("text").load("dbfs:/mnt//...." )
Set up service credentials for multiple accounts
You can set up service credentials for multiple Azure Data Lake Storage Gen1 accounts for use within a single Spark session by adding account.<account-name> to the configuration keys. For example, if you want to set up credentials for both the accounts to access adl://example1.azuredatalakestore.net and adl://example2.azuredatalakestore.net, you can do this as follows:
spark.conf.set("fs.adl.oauth2.access.token.provider.type","ClientCredential")spark.conf.set("fs.adl.account.example1.oauth2.client.id","" )spark.conf.set("fs.adl.account.example1.oauth2.credential",dbutils.secrets.get(scope="" ,key="" ))spark.conf.set("fs.adl.account.example1.oauth2.refresh.url","https://login.microsoftonline.com//oauth2/token" )spark.conf.set("fs.adl.account.example2.oauth2.client.id","" )spark.conf.set("fs.adl.account.example2.oauth2.credential",dbutils.secrets.get(scope="" ,key="" ))spark.conf.set("fs.adl.account.example2.oauth2.refresh.url","https://login.microsoftonline.com//oauth2/token" )
This also works for the cluster Spark configuration:
spark.hadoop.fs.adl.oauth2.access.token.provider.type ClientCredential

spark.hadoop.fs.adl.account.example1.oauth2.client.id <application-id-example1>
spark.hadoop.fs.adl.account.example1.oauth2.credential <service-credential-example1>
spark.hadoop.fs.adl.account.example1.oauth2.refresh.url https://login.microsoftonline.com/<directory-id-example1>/oauth2/token

spark.hadoop.fs.adl.account.example2.oauth2.client.id <application-id-example2>
spark.hadoop.fs.adl.account.example2.oauth2.credential <service-credential-example2>
spark.hadoop.fs.adl.account.example2.oauth2.refresh.url https://login.microsoftonline.com/<directory-id-example2>/oauth2/token
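With either approach in place, the same session can read from both accounts. A minimal sketch (the <directory-name> paths are illustrative):
# Read from both accounts in the same Spark session.
# Substitute real directory names for the <directory-name> placeholders.
df1 = spark.read.format("parquet").load("adl://example1.azuredatalakestore.net/<directory-name>")
df2 = spark.read.format("parquet").load("adl://example2.azuredatalakestore.net/<directory-name>")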
The following notebook demonstrates how to access Azure Data Lake Storage Gen1 directly and with a mount.