3.8 Insider Protection
Insider Protection is an additional protection mechanism for Veeam Cloud Connect backups. It guards backups stored in Cloud Connect against accidental deletions, and against attacks on the tenant's Veeam environment where remote data stored in Veeam Cloud Connect is deleted on purpose, for example during ransomware attacks, so that the victim cannot avoid paying the ransom by restoring data from a backup.
In fact, because Veeam Cloud Connect is connected to the tenant's Veeam console, every backup stored there is always available to the end user. This is an advantage for legitimate usage of the solution, but it also allows an attacker who takes control of the console to delete those remote backups:
3.18: Backups stored in Cloud Connect can be deleted from the tenant’s console
In this case, Veeam Cloud Connect cannot distinguish a legitimate user from an attacker; what it sees is an account with the correct credentials requesting a deletion, and Cloud Connect obeys the command.
However, there is a solution to protect against this scenario, and that is Insider Protection. In essence, you can think of it as the Recycle Bin of Veeam Cloud Connect. It is not enabled by default, but it can be enabled specifically for any given tenant:
3.19: Enable IP for a tenant in Cloud Connect
Once IP is enabled, every file deleted from the tenant's repository is not immediately removed, but rather moved into a special folder of the Backup Repository:
3.20: The folder structure of IP
As you may notice in this screenshot, the recycle bin contains only incremental files, and this is due to the type of backup or backup copy job that has been sent to Cloud Connect. IP simply saves in the bin any file that is deleted, whether on purpose or by the configured job retention. But for a successful restore, a full file has to be available to properly anchor the backup chain. For this reason, tenants need to configure their backup jobs so that periodic full files end up in the bin; otherwise an attack will still be successful.
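To make these mechanics concrete, here is a minimal Python sketch of the interception logic described above. The folder name and helper functions are hypothetical illustrations, not Veeam's actual implementation or API:

```python
from pathlib import PurePosixPath

# Assumed folder layout, for illustration only.
RECYCLE_BIN = PurePosixPath("/backups/tenant01/_RecycleBin")

def delete_with_insider_protection(backup_file: PurePosixPath) -> PurePosixPath:
    """Instead of erasing the file, relocate it into the tenant's recycle bin."""
    return RECYCLE_BIN / backup_file.name

def bin_is_restorable(bin_files: list[str]) -> bool:
    """A restore needs at least one full (.vbk) to anchor the chain;
    increments (.vib) alone are not enough."""
    return any(name.endswith(".vbk") for name in bin_files)

# A bin holding only increments cannot seed a restore:
assert not bin_is_restorable(["job1.vib", "job2.vib"])
# One sealed full makes the chain recoverable:
assert bin_is_restorable(["job0.vbk", "job1.vib", "job2.vib"])
```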
The backup copy job of the previous example uses regular retention, configured as in the screenshot below:
3.21: Backup Copy regular retention
This backup copy job stores 7 restore points in Cloud Connect. When retention is reached, the oldest restore points are deleted and kept for an additional 14 days in the Recycle Bin. The problem is that, by the nature of this type of job, only ONE full file is stored at any time in the Cloud Repository, and it exists only in the primary storage area. When the retention is fully utilized, the file chain looks like this:
3.22: Regular chain inside VCC with IP enabled
You can immediately visualize where the problem lies. A primary backup job with forever forward incremental mode, or a regular Backup Copy job, will only have one full backup, which is continuously updated by merging old incrementals into it. For this reason, this file will never be placed in the Recycle Bin, and thus, in case of a complete deletion of all the files in the Cloud Repository, there will be no full file in the Recycle Bin.
For this reason, in order to make the Recycle Bin really effective, tenants need to use a backup mode that periodically creates full files that are “sealed”, that is, never touched again once created. This allows them to age, be deleted by job retention and thus be placed in the Recycle Bin, where they are ultimately protected. The sketch below contrasts the two behaviors.
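Here is a small sketch, with assumed names rather than Veeam code, contrasting the two retention behaviors: a forever-forward merge never releases its full file, while sealed periodic fulls age out of retention and land in the bin:

```python
def forever_forward_retire(chain: list[str], recycle_bin: list[str]) -> list[str]:
    """Forever-forward / regular backup copy: the oldest increment is merged
    into the single full, which is rewritten in place and never deleted;
    only the retired .vib reaches the recycle bin."""
    full, oldest_vib, *rest = chain           # chain = [full, vib1, vib2, ...]
    recycle_bin.append(oldest_vib)            # the merged increment is retired
    return [full, *rest]                      # the lone full stays in production

def periodic_full_retire(chain: list[str], recycle_bin: list[str]) -> list[str]:
    """Periodic (e.g. GFS) fulls are sealed: retention deletes the oldest one
    outright, so Insider Protection can move it into the bin."""
    oldest_full, *rest = chain                # chain = [vbk1, vbk2, ...]
    recycle_bin.append(oldest_full)           # intercepted, not erased
    return rest

bin_a: list[str] = []
forever_forward_retire(["full.vbk", "day2.vib", "day3.vib"], bin_a)
assert bin_a == ["day2.vib"]                  # no full ever lands in this bin

bin_b: list[str] = []
periodic_full_retire(["week1.vbk", "week2.vbk"], bin_b)
assert bin_b == ["week1.vbk"]                 # a sealed full is now protected
```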
The Veeam console on the tenant's side will also warn if a job without periodic fulls has been configured against a Cloud Repository where IP is enabled:
3.23: IP job type warning
The complete text says: “Your service provider has implemented backup files protection against deletion by an insider for this cloud repository. To protect against advanced attack vectors, we recommend that you configure your cloud backup jobs to keep multiple full backups on disk (as opposed to forever-incremental chain with a single full backup).”
Tenants can, for example, reconfigure a Backup Copy job by also enabling GFS retention, like this:
3.24: Backup Copy GFS retention
As soon as some full files are created and deleted, this is how it will look in the file system:
3.25: The folder structure of IP with GFS
I've hidden the multitude of .vib files in the screenshot, but you can clearly see that we now have one .vbk file in the Recycle Bin, which can be used when a restore is requested by the tenant. We can represent this situation with this scheme:
3.26: GFS chain inside VCC with IP enabled
Even if the entire Cloud Repository is deleted, the service provider now has one consistent backup file to help the tenant restore their data.
Insider Protection and Capacity Tier
You may have noticed in picture 3.25 that the .vbk file stored in the Recycle Bin has a really small size. Instead of being several GB like its siblings stored in the production area, its size is just a few MB. And even though the file name states that it was created on 2018-12-03, the last modified date is 2018-12-14.
How is that possible?
This is because the SOBR group where the backups are stored also has Capacity Tier enabled, as explained in chapter 2.8. This creates an interesting and powerful interaction between the two features, which we will explore in this section using a real example.
Our Repository is built using a Performance Tier and a Capacity Tier. The tiering option is configured to move data to the Capacity Tier after 15 days.
Then, the service provider configures Insider Protection for the tenant, with 60 days of protection.
Finally, a backup copy job with GFS retention is created by the tenant, and it is configured as in picture 3.24.
As a recap:
|Backup Copy|7 restore points + 2 weekly fulls|
|Capacity Tier|Moves data after 15 days|
|Insider Protection|Keeps data for 60 days|
Let's ignore the incremental .vib files, as they cannot guarantee a restore without a full file, and focus on the full files. With this job configuration, at any point in time we will have 3 full files: 1 for the regular retention and 2 for the weekly fulls:
3.27: IP + Capacity Tier after 14 days
On the 15th day, the Capacity Tier kicks in and moves the oldest full file to the Capacity Tier, so that its blocks are stored in the object storage, while a dehydrated version of the .vbk file remains locally:
3.28: IP + Capacity Tier after 15 days
At day 21, a new weekly full is created, and because the GFS configuration is set to hold 2 weekly fulls, the oldest VBK file is deleted by the job. This file, however, is only 21 days old, so it is intercepted by Insider Protection and held for another 60 days.
But what exactly is intercepted and moved?
Only the dehydrated VBK file is moved, while the blocks belonging to it that were tiered to the Capacity Tier stay in place. In this way, no download happens from the Capacity Tier to the SOBR, and this is especially helpful when the Capacity Tier is built using public cloud services that bill for downloads.
3.29: IP + Capacity Tier after 21 days
Once the Insider Protection retention is also reached, the dehydrated VBK file is finally deleted, and if the blocks stored in the Capacity Tier no longer belong to any dehydrated file, they are deleted too.
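The whole sequence can be summarized with a toy model. The day numbers and states follow the example above; the function itself is an illustrative assumption, not how Veeam implements the logic:

```python
MOVE_AFTER_DAYS = 15   # SOBR tiering window from the example
IP_DAYS         = 60   # Insider Protection retention
DELETED_BY_GFS  = 21   # day the oldest full is retired by the job

def full_file_state(today: int, created: int = 0) -> str:
    """Where the oldest full file's data lives on a given day."""
    if today >= DELETED_BY_GFS + IP_DAYS:
        return "purged: stub deleted, orphaned blocks removed from capacity tier"
    if today >= DELETED_BY_GFS:
        return "stub in local recycle bin; blocks untouched in capacity tier"
    if today - created >= MOVE_AFTER_DAYS:
        return "dehydrated: stub on performance tier, blocks in capacity tier"
    return "complete .vbk on the performance tier"

# Walking the timeline of figures 3.27 through 3.29:
for day in (14, 15, 21, DELETED_BY_GFS + IP_DAYS):
    print(f"day {day:2}: {full_file_state(day)}")
```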
As a recap, we have two possible scenarios:
|Job retention kicks in before Capacity Tier|File is moved directly to LOCAL IP|
|Job retention kicks in after Capacity Tier|File is dehydrated and moved to the Capacity Tier; only the dehydrated .vbk is moved to LOCAL IP; blocks stay in REMOTE IP / Capacity Tier|
TIP: in order to use the Capacity Tier as the designated Recycle Bin, providers should configure the SOBR tiering option so that the Capacity Tier always kicks in before job retention does.
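As a quick sanity check of this tip, a provider could simply compare the two time windows. The helper below is a hypothetical illustration, not a Veeam cmdlet:

```python
# Hypothetical check (not a Veeam cmdlet): tiering must act before the
# earliest point at which job retention deletes a full file, so that the
# deleted file is already dehydrated when Insider Protection intercepts it.
def capacity_tier_acts_first(move_after_days: int, full_deleted_after_days: int) -> bool:
    return move_after_days < full_deleted_after_days

assert capacity_tier_acts_first(15, 21)       # the chapter's example: OK
assert not capacity_tier_acts_first(30, 21)   # tiering too slow: full stays local
```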
In chapter 7.1 we will discuss how service providers can restore data from Insider Protection, both locally and from the Capacity Tier.