Storage accounts contain data objects such as blobs, files, queues, tables and disks. By default, these are accessible from anywhere over HTTP and HTTPS. Although some layers of security are enabled out of the box, it's your responsibility to enable the rest.
There are multiple data object types available, so make sure you choose one that fits your needs; Microsoft's documentation compares the options.
Below are a few security controls you can enable to help keep your data objects secure.
Naming

The naming of the account is somewhat important. There are several "tools" out there that scan for keywords. The reason these tools are so successful is that cloud vendors like Microsoft and AWS follow a standard naming pattern. For Azure, it's as follows:
- Blob storage: https://mystorageaccount.blob.core.windows.net
- Table storage: https://mystorageaccount.table.core.windows.net
- Queue storage: https://mystorageaccount.queue.core.windows.net
- Azure Files: https://mystorageaccount.file.core.windows.net
This makes life easier for you and the attackers.
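To see how predictable this is, here is a minimal sketch that derives every service endpoint from nothing but the account name. The account name `mystorageaccount` is just the placeholder from the list above.

```python
def storage_endpoints(account_name: str) -> dict:
    """Derive the standard Azure Storage endpoints for an account name.

    Because the pattern is fixed, an attacker who guesses the account
    name gets all four endpoints for free.
    """
    services = ["blob", "table", "queue", "file"]
    return {
        svc: f"https://{account_name}.{svc}.core.windows.net"
        for svc in services
    }

print(storage_endpoints("mystorageaccount")["blob"])
# https://mystorageaccount.blob.core.windows.net
```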
You don't need to go overboard, but do consider how you access it. If it's mapped through automation and requires no user input, setting the name to random characters would not hinder access, and it would reveal nothing to an attacker.
The tools out there at the minute mainly scan for keywords and domains. I would recommend avoiding your registered domains and keywords such as file, department, share, finance, payment, archive and sensitive when naming your storage account. These look somewhat inviting.
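As a rough illustration of what those scanners do, the sketch below combines an organisation name with common keywords and filters by the real naming rule (storage account names are 3 to 24 lowercase letters and digits). The organisation name `acme` and the keyword list are purely illustrative.

```python
def candidate_names(org: str, keywords: list) -> list:
    """Build candidate storage account names the way enumeration
    tools do: keyword before or after the organisation name, then
    filter to valid account names (3-24 alphanumeric characters)."""
    names = {f"{org}{kw}" for kw in keywords} | {f"{kw}{org}" for kw in keywords}
    return sorted(n for n in names if 3 <= len(n) <= 24 and n.isalnum())

# A tool would now try to resolve <name>.blob.core.windows.net for each.
print(candidate_names("acme", ["finance", "archive", "backup"]))
```

Names built from your domain plus a keyword like "finance" will appear on lists like this; random characters will not.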
Region

Although this doesn't play a big part in "data security", it does play a massive part in data privacy, protection and compliance laws/regulations. I won't go into detail here as your compliance team will need to be heavily involved. Where you store your data should definitely be considered though. You don't want a hefty fine through the post.
Access Control (IAM)
This should be the first place you look before moving or creating any data objects. Access should only be provisioned to those who need it. There are pre-configured roles which will help you apply the correct access. This will need to be controlled at the resource group as well, which is the parent of the storage account.
Remember that these RBAC roles are not linked to the directory roles within Azure AD.
At the storage account level, the built-in roles are as follows:

- Storage Blob Data Owner: use to set ownership and manage POSIX access control for Azure Data Lake Storage Gen2. For more information, see Access control in Azure Data Lake Storage Gen2.
- Storage Blob Data Contributor: use to grant read/write/delete permissions to Blob storage resources.
- Storage Blob Data Reader: use to grant read-only permissions to Blob storage resources.
- Storage Queue Data Contributor: use to grant read/write/delete permissions to Azure queues.
- Storage Queue Data Reader: use to grant read-only permissions to Azure queues.
- Storage Queue Data Message Processor: use to grant peek, retrieve, and delete permissions to messages in Azure Storage queues.
- Storage Queue Data Message Sender: use to grant add permissions to messages in Azure Storage queues.
Before you add groups or users to these, it's important to filter out permissions and have a design. Remember that you can control IAM access at the subscription, resource group, storage account or container/queue level. Each level has different roles, so it's worth reviewing. The diagram below shows the basic hierarchy:
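That hierarchy maps directly onto the resource ID a role assignment is scoped to: each level is a prefix of the one below it, and the deeper the scope, the narrower the grant. The sketch below builds the four scope strings; all names (subscription ID, resource group, account, container) are placeholders.

```python
def scopes(sub: str, rg: str, account: str, container: str) -> dict:
    """Build the Azure Resource Manager scope ID for each level of the
    hierarchy. A role assigned at a scope applies to everything below it."""
    subscription = f"/subscriptions/{sub}"
    resource_group = f"{subscription}/resourceGroups/{rg}"
    storage_account = (f"{resource_group}"
                       f"/providers/Microsoft.Storage/storageAccounts/{account}")
    return {
        "subscription": subscription,
        "resource_group": resource_group,
        "storage_account": storage_account,
        "container": f"{storage_account}/blobServices/default/containers/{container}",
    }

s = scopes("00000000-0000-0000-0000-000000000000", "prod-rg", "acmearchive", "invoices")
print(s["container"])
```

Assigning a role like Storage Blob Data Reader at the `container` scope grants nothing at the account or group level above it.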
Firewalls and Virtual Networks

When creating your storage account, you will notice that the default is Public endpoint (all networks). This means it is reachable by anyone over the internet. This setting applies to both the management level (portal) and the data access level.
This is something I would suggest you avoid. You can leave this setting during creation, as the creation wizard gives fewer options, but if you do, make sure you remember to go back to it afterwards. You will find it under Firewalls and Virtual Networks.
If you open this tab, you will see that you can select 'Selected networks'. This is where you will want to involve both Security and Networking, as the rules are broken up into two sections.
The first section is your virtual network and covers the route your traffic takes internally. These rules will contain private IPs and any provisioned virtual networks. At this level you are focused on which networks your data is accessed from (not managed from).
The next section is how you access and manage the storage account over the internet. Remember, if you only allow the private network, you won't be able to manage the account through the portal as you will be blocked. You will have to be on a server or routed through the virtual networks you've enabled.
If you don't have a tunnel set up, I would recommend allow-listing only your office or data centre IPs. If following a security-first model, you will most likely have dedicated servers from which you manage your Azure environment. Because of this, the public source IP should be static and can therefore be locked down.
This would be the same if you have a vendor or third party managing your cloud. Lock it down to their public IP so that their staff aren't accessing your platform from just anywhere.
If you don't have conditional access or these controls in place, anyone can attempt to access the data on any device (Android, macOS, Windows XP) from anywhere (Costa, McDonald's, airports): places which may not have secured network connectivity.
If you want to allow remote users to manage your cloud, you may have a harder time. If you have an SDP or VPN which routes their traffic through the data centre, you can simply apply the model above. If not, you may find yourselves forever adding IPs to the list. It's a balance of accessibility and security.
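The firewall logic itself is simple: a request is allowed only if its source IP falls inside one of the configured ranges. The sketch below models that check with Python's `ipaddress` module; the office and vendor ranges are made-up documentation addresses.

```python
import ipaddress

# Illustrative allow-list: an office /24 range and a single vendor IP.
ALLOWED = [ipaddress.ip_network(n) for n in ["203.0.113.0/24", "198.51.100.7/32"]]

def is_allowed(source_ip: str) -> bool:
    """Return True if the source IP sits inside any allowed range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("203.0.113.50"))  # inside the office range -> True
print(is_allowed("192.0.2.9"))     # random café IP -> False
```

The maintenance cost mentioned above is exactly this list: every remote user on a dynamic IP means another entry to add and, eventually, to forget to remove.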
The last section can remove the controls you have put in place above, so it's best to have confidence in your model before enabling it.
The trusted Microsoft services are listed in Microsoft's documentation.
You shouldn't have to worry about this as it's enabled by default. It might be worth checking though, as there is an option to disable it.
Tags

Tagging is a great way to keep an inventory of records. If you have security reports or monitoring that target certain tags, you want to ensure you are tagging resources correctly. If not, the monitoring becomes somewhat flawed.
Having a really strong tagging system from the start will really pay off later down the line. If you've chosen to obscure your naming convention, you could apply a helpful tag which identifies the service.
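A tag policy is easy to check automatically. The sketch below flags resources missing any required tag key; the required keys (`owner`, `service`, `environment`) are an example policy, not an Azure default.

```python
# Example policy: every resource must carry these tag keys.
REQUIRED_TAGS = {"owner", "service", "environment"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys absent from a resource's tags."""
    return REQUIRED_TAGS - set(resource_tags)

# A storage account with an obscured name but helpful tags:
acct_tags = {"owner": "finance-team", "service": "payments-archive"}
print(missing_tags(acct_tags))  # {'environment'}
```

Running a check like this regularly keeps tag-driven monitoring honest: an untagged resource is a blind spot, not just untidy.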
Access Keys

Access keys provide an easy and secure way to access your data objects. Although the method itself is secure, it can quickly become your worst nightmare.
Attackers are scanning for these keys because they allow access to the data with a single string. It's convenient for both you and malicious parties. If you have Azure Storage Explorer, you can use these keys and connection strings to get direct access to the data objects.
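To see why one leaked string is enough, here is a sketch that parses a connection string into its fields. The format shown is the standard `key=value;` layout; the account name and key are dummy values.

```python
def parse_connection_string(conn: str) -> dict:
    """Split an Azure-style connection string into its key/value fields.
    Splitting on the first '=' only preserves base64 key padding."""
    return dict(part.split("=", 1) for part in conn.split(";") if part)

# Dummy connection string: everything needed for full data access in one line.
conn = ("DefaultEndpointsProtocol=https;"
        "AccountName=mystorageaccount;"
        "AccountKey=ZmFrZWtleQ==")

parts = parse_connection_string(conn)
print(parts["AccountName"])  # mystorageaccount
```

Protocol, account name and key all travel together, which is why a single plain-text file in a repo or share hands over the whole account.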
At no point during this connection does Storage Explorer prompt you for credentials. This is why it's important to keep these hidden. Make sure developers or cloud engineers are not storing these strings in plain-text files or in publicly accessible folders. If you feel the keys have been compromised, you can quickly regenerate them from the Access keys blade:
Because you are provided with two keys, it might be worth designing a process around them. Perhaps key1 is used for critical access and has a controlled regeneration process in place (slow), whilst key2 is used for common access and can be regenerated at any time (quick). Maybe key2 is used for public access, such as clients over the internet.
Depending on whether you have Azure AD Domain Services set up, you might want to consider enabling identity-based access for Azure Files. This is under the configuration tab:
Below is a simple diagram to show how it works.
There are a lot of benefits to it, so should you want to read more, see Microsoft's documentation.
Encryption

Although encryption is handled automatically, you might want to add another layer to it. By default, Microsoft encrypts your data with Microsoft's own managed keys. Although the risk is low, it does mean that Microsoft can decrypt your files; after all, you are using their keys. It's therefore worth considering managing your own encryption keys for your storage accounts and Azure resources.
The reason is that it puts you fully in control of your data. Even if law enforcement or a Microsoft employee tried to access your data through the traditional means, they would not be able to decrypt it. In theory, they would have to come to you first. This is why most companies have a KMS/HSM in place when using cloud services: the cloud provider may hold their data, but they can't read it because they can't decrypt it.
The caveat is that you must manage the keys well. If you lose them or something goes wrong, the responsibility sits with you; Microsoft may not be able to recover them.
Shared Access Signatures (SAS)
Much like the access keys, these strings need to be hidden and secured. SAS does, however, allow you to share with more granular conditions, for example restricting based on IP and timeframes. It also allows you to set the privilege level. Remember that by default you hand over something close to root access, so make sure you check what is enabled before sharing.
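Conceptually, a SAS token is an HMAC-SHA256 signature (keyed with the account key) over a string that encodes the granted permissions, expiry and any IP restriction. The sketch below shows that idea only; the field layout is simplified and is not Azure's exact string-to-sign format, and the key is a dummy value.

```python
import base64
import hashlib
import hmac

def sign_sas(account_key_b64: str, permissions: str, expiry: str, ip: str) -> str:
    """Illustrative SAS-style signature: HMAC-SHA256 over the granted
    conditions, keyed with the (base64-encoded) account key."""
    string_to_sign = "\n".join([permissions, expiry, ip])
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    return base64.b64encode(sig).decode()

# Read-only, time-boxed, IP-restricted grant with a dummy key.
token = sign_sas("ZmFrZWtleQ==", "r", "2024-01-01T00:00:00Z", "203.0.113.0/24")
print(token)
```

Because the conditions are inside the signed string, widening the permissions changes the signature: the holder cannot quietly upgrade "r" to "rw". The flip side is that the token itself is a bearer credential and must be protected like the keys.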
Security Center

This only applies if you have Security Center running as your protection and monitoring service. If you don't use the service at all, then you can somewhat ignore this section. I would, however, recommend following any advisory or recommendation that Microsoft generates.
Resource Locks

Although more of a compliance measure, it does somewhat fall under security. Enabling locks can prevent malicious activity such as deleting or modifying critical resources. It can also help enforce a level of privilege, such as allowing only read-only access. For more information, see Microsoft's documentation on locking resources.
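Azure has two lock levels, CanNotDelete and ReadOnly, and their effect is easy to model. The sketch below captures the decision logic only; it is an illustration of the rules, not an Azure API.

```python
def operation_allowed(lock_level, operation: str) -> bool:
    """Model ARM lock behaviour: ReadOnly blocks anything that changes
    the resource, CanNotDelete blocks only deletion, and no lock (None)
    allows everything."""
    if lock_level == "ReadOnly":
        return operation == "read"
    if lock_level == "CanNotDelete":
        return operation != "delete"
    return True

print(operation_allowed("CanNotDelete", "delete"))  # False
print(operation_allowed("CanNotDelete", "write"))   # True
print(operation_allowed("ReadOnly", "write"))       # False
```

Note that locks apply regardless of RBAC: even an Owner must remove the lock before the blocked operation succeeds, which is exactly what makes them useful against both accidents and malice.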
As with anything, always check that alerts are functioning. You want as many eyes on your environment as you can get, and having the correct notification, auditing and monitoring services enabled will help during those times of need.