Professional Azure SQL Database Administration

DTU Pricing Models

Database Transaction Units

The amount of resources (CPUs, I/O, and RAM) to be assigned to an Azure SQL Database in a particular service tier is calculated in Database Transaction Units (DTUs).

DTUs guarantee that an Azure SQL Database will always have a certain amount of resources and a certain level of performance (offered under a particular DTU model) at any given point of time, independent of other SQL databases on the same Azure SQL server or across Microsoft Azure.

The ratio for the aforementioned resources was calculated by Microsoft by running an Online Transaction Processing (OLTP) benchmark.

The DTU amount signifies how powerful an Azure SQL Database is. For example, if a workload of, say, 5 queries takes 80 seconds on the Basic tier with 5 DTUs, then it'll take around 4 seconds on the Standard S3 tier with 100 DTUs.
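This roughly linear relationship can be sketched with a quick back-of-the-envelope calculation (an approximation only; real workloads rarely scale perfectly linearly with DTUs):

```python
def estimated_runtime(baseline_seconds: float, baseline_dtus: int, target_dtus: int) -> float:
    """Estimate workload runtime, assuming performance scales linearly with DTUs."""
    return baseline_seconds * baseline_dtus / target_dtus

# The example above: 80 seconds at 5 DTUs (Basic), moved to 100 DTUs (Standard S3)
print(estimated_runtime(80, 5, 100))  # -> 4.0 seconds
```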

There are four service tiers available in the DTU-based pricing model:

Note

The DTU is the measure of Azure SQL Database performance. This is discussed later in the lesson.

Figure 2.2: The DTU-based pricing model

  • Basic service tier: The Basic tier is the lowest tier available and is applicable to small, infrequently used applications, usually supporting a single active transaction at any given point in time.

    The Basic tier has a size limit of 2 GB, a performance limit of 5 DTUs, and costs $5/month:

Figure 2.3: Performance statistics of the Basic tier

  • Standard service tier: This is the most commonly used service tier and is best suited to web applications or workgroups with low to medium I/O performance requirements. Unlike the Basic service tier, it has four different performance levels: S0, S1, S2, and S3. Each performance level offers the same maximum database size (250 GB); however, they differ in DTUs and cost. S0, S1, S2, and S3 offer 10, 20, 50, and 100 DTUs, and cost $15, $30, $75, and $150 per month, respectively:

Figure 2.4: Standard service tier

  • Premium service tier: The Premium service tier is used for mission-critical, high-transaction-volume applications. It supports a large number of concurrent users and has a high I/O performance compared to the Basic and Standard service tiers.

    It has six different performance levels: P1, P2, P4, P6, P11, and P15. Each performance level offers different sizes and DTUs. P1, P2, P4, P6, P11, and P15 are priced at $465, $930, $1,860, $3,720, $7,001, and $16,003 per month, respectively:

Figure 2.5: Premium service tier

  • Premium RS service tier: The Premium RS service tier is used for I/O-intensive workloads that can tolerate lower availability guarantees, such as analytical workloads. It has four different performance levels: PRS1, PRS2, PRS3, and PRS4, which are priced at $116, $232, $465, and $930 per month, respectively:

Figure 2.6: Premium RS service tier

Note

The prices listed here are for a single database and not for an elastic pool.
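Note that the Standard performance levels quoted above all work out to the same cost per DTU, so within that tier you are paying linearly for performance. A quick sketch using the figures above:

```python
# Standard performance levels quoted above: level -> (DTUs, monthly price in USD)
standard_levels = {"S0": (10, 15), "S1": (20, 30), "S2": (50, 75), "S3": (100, 150)}

for level, (dtus, price) in standard_levels.items():
    print(f"{level}: ${price / dtus:.2f} per DTU per month")  # $1.50 for every level
```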

The Premium service tier supports read scale-out and zone redundancy.

Read scale-out, when enabled, routes read queries to a read-only secondary replica. The read-only secondary is of the same compute and storage capacity as the primary replica.

An Availability Zone in an Azure region is an isolated datacenter building. There can be more than one Availability Zone in an Azure region. When opting for the Premium service tier, you can choose the Azure SQL Database to be zone-redundant. This will ensure that a copy of the database is available in another zone within the same region to facilitate high availability.

The zone-redundancy feature is available for databases up to 1 TB in size.

vCore Pricing Model

Unlike DTU-based pricing models, where compute and storage are governed by the DTU size, the vCore (virtual core) pricing tiers allow you to independently define and control the compute and storage based on the workload of your on-premises SQL Server infrastructure.

Note

Compute = vCore + Memory

As mentioned earlier in Lesson 1, Microsoft Azure SQL Database Primer, in the Introduction to Managed Instance section, the vCore pricing model supports two hardware generations.

Hardware Generations

Gen4 offers up to 24 logical CPUs, based on Intel E5-2673 v3 (Haswell) 2.4 GHz processors, with 7 GB of memory per core and attached SSD storage. Gen4 offers more memory per core.

Gen5 offers up to 80 logical CPUs, based on Intel E5-2673 v4 (Broadwell) 2.3 GHz processors, with 5.1 GB of memory per core and fast eNVMe SSD storage. Gen5 offers greater compute scalability with its 80 logical CPUs.

vCore Service Tiers

There are three service tiers available with the vCore pricing model: General Purpose, Business Critical, and Hyperscale (in preview):

Figure 2.7: Service tier feature table

Note

To find out more about the vCore resource limits for a single database, visit https://docs.microsoft.com/en-us/azure/sql-database/sql-database-vcore-resource-limits-single-databases.

Azure Hybrid Benefit

Azure Hybrid Benefit for SQL Server allows you to use existing on-premises SQL Server Standard or Enterprise licenses with software assurance for discounted rates when migrating to Azure SQL Database with a vCore-based pricing tier.

Azure Hybrid Benefit provides an up to 30% cost saving for an existing on-premises license with software assurance.

When configuring a vCore-based service tier, there are two license types available:

  • BasePrice, which provides discounted rates for existing on-premises SQL Server licenses. You pay only for the underlying Azure infrastructure. This is the option to choose when migrating an on-premises database to Azure SQL Database.
  • LicenseIncluded, which includes the cost of the SQL Server license and Azure infrastructure.
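The saving can be estimated as follows. This is a sketch with a hypothetical $1,000/month LicenseIncluded rate and the maximum 30% discount; actual prices vary by region, service tier, and compute size:

```python
def base_price(license_included_price: float, savings: float = 0.30) -> float:
    """Estimate the BasePrice rate from the LicenseIncluded rate,
    assuming the full 30% Azure Hybrid Benefit saving applies."""
    return license_included_price * (1 - savings)

print(base_price(1000.0))  # a hypothetical $1,000/month drops to $700/month
```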

Hyperscale Service Tier

Hyperscale decouples the compute, storage, and log into microservices to provide a highly scalable and highly available service tier.

A traditional database server, as shown in the following diagram, consists of compute (CPU and memory) and storage (data files and log files):

Figure 2.8: Database server architecture

The SQL Server engine is made up of three main components: the query processor, the storage engine, and the SQL operating system.

The query processor performs query parsing, optimization, and execution.

The storage engine serves the data required by the queries and manages the data and log files.

The SQL operating system is an abstraction over the Windows/Linux operating system that is mainly responsible for task scheduling and memory management.

Hyperscale takes the storage engine out of the database server and splits it into independent, scale-out sets of components: page servers and a log service, as shown in the following diagram.

Comparing it with the traditional database server, observe that the data and log files are no longer part of the database server.

Figure 2.9: Architecture of Hyperscale

A detailed architecture diagram for Hyperscale is shown here:

Figure 2.10: Detailed architecture of Hyperscale

The different Hyperscale components are explained here:

  • Compute nodes: A compute node is a SQL server without the data files and the log files. Compute nodes are similar to the SQL Server query processor, responsible for query parsing, optimization, and execution. Users and applications connect and interact with the compute nodes.

    Each compute node has a local data cache, a Resilient Buffer Pool Extension (RBPEX).

    Note

    The RBPEX is a SQL Server feature that allows SSDs to be used as an extension to the buffer pool (server memory, or RAM). With an RBPEX, data can be cached to extended buffers (SSDs), thereby decreasing physical disk reads and increasing I/O throughput.

    The primary compute node takes user and application transactions and writes them to the log service landing zone. If the data requested by a query isn't available in the primary node's RBPEX, it gets the missing data from the page servers.

    The secondary compute nodes are used to offload reads from the primary compute node. The Hyperscale tier offers up to four secondary replicas for read scale-out, high availability, and disaster recovery. Each replica uses the same vCore model as the primary replica and is charged separately.

    You connect to a secondary replica by specifying ApplicationIntent as ReadOnly in the connection string.

    Each secondary replica, similar to the case with the primary node, has a local cache (RBPEX). When a read request is received by a secondary replica, it first checks for the data in the local cache. If the data is not found in the local cache, the secondary replica gets the data from the page servers.

    When the primary compute node goes down, a failover occurs: one of the secondary nodes is promoted to primary and starts accepting read-write transactions.

    A replacement secondary node is then provisioned and warmed up; its cache is populated in the background without affecting the performance of the other compute nodes.

    No action needs to be taken at the storage level, as the compute nodes are separate from the storage. This is in contrast to the regular SQL Server architecture, where a database server hosts both the SQL Server engine and the storage (as explained earlier in this section). If that database server goes down, the storage (that is, the data files and the log files) goes down with it.
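To route a connection to one of the read-only secondaries, ApplicationIntent=ReadOnly is added to the connection string. A minimal sketch follows; the server and database names are hypothetical:

```python
# Hypothetical server/database names; the key setting is ApplicationIntent=ReadOnly.
parts = {
    "Server": "tcp:toyfactory.database.windows.net,1433",
    "Database": "toystore",
    "ApplicationIntent": "ReadOnly",  # route this connection to a read-only secondary
}
connection_string = ";".join(f"{key}={value}" for key, value in parts.items())
print(connection_string)
```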

  • Page server node: The page server nodes are where the database data files live. Each page server node manages 1 TB of data and represents one data file. The data from each page server node is persisted in Azure standard storage, which makes it possible to rebuild a page server from that storage in case of a failure, so no data is lost. The page servers receive data modifications from the log service and apply them to the data files. Each page server node has its own local cache (RBPEX), and the data is fully cached in it to avoid data requests being forwarded to Azure standard storage. A database can have one or more page server nodes, depending on its size. As the database grows, a new page server is automatically added once the existing page server is 80% full. Hyperscale, for now, supports databases up to 100 TB in size.
  • Log service node: The log service node replaces the transaction log and is likewise separated from the compute nodes. The log service gets the log records from the primary node via the landing zone, which is Azure premium storage. Azure premium storage has built-in high availability, which prevents the loss of any log records. The log service persists the log records from the landing zone to a durable log cache, and forwards them to the secondary compute nodes and the page server nodes. It also writes the log records to long-term log storage, which is Azure standard storage and is used for point-in-time recovery. Once the log records are written to long-term storage, they are deleted from the landing zone to free up space.

    The log records are kept in long-term log storage for the duration of the backup retention period that has been configured for the database. No transaction log backups are needed.

    There's no hot standby for a log service node because it's not required. The log records are persisted first in Azure premium storage, which has its own high availability provision, and then in Azure standard storage.
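The page server sizing rule described above (roughly 1 TB per page server, with a new one added once the last is about 80% full) can be sketched as follows. This is a simplified illustrative model, not the actual provisioning logic:

```python
import math

def page_server_estimate(database_size_tb: float,
                         capacity_tb: float = 1.0,
                         growth_threshold: float = 0.80) -> int:
    """Rough estimate of page server count: each manages ~1 TB of data,
    and a new one is added once the existing one is ~80% full."""
    return max(1, math.ceil(database_size_tb / (capacity_tb * growth_threshold)))

print(page_server_estimate(0.5))   # -> 1
print(page_server_estimate(2.0))   # -> 3
```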

Hyperscale, with this improved architecture, offers the following benefits:

  • Nearly instantaneous backups. A backup is taken by taking a snapshot of the file in Azure standard storage. The snapshot process is fast and takes less than 10 minutes to back up a 50 TB database.
  • Similar to database backups, database restores are also based on file snapshots and are a lot faster than in any other performance tier.
  • Higher log throughput and faster transaction commits regardless of data volumes.
  • The primary replica does not need to wait for an acknowledgement-of-transaction commit from the secondary replica. This is because the transaction log is managed by a log service.
  • Supports up to 100 TB of database size.
  • Rapid read scale-out by creating read replicas.

Exercise: Provisioning a Hyperscale SQL Database Using PowerShell

  1. Open a new PowerShell console window and change the working directory to C:\Code\Lesson1. Then enter and execute the following PowerShell command:

    .\ProvisionAzureSQLDatabase.ps1 -ResourceGroup RGPackt -Location "East US 2" -SQLServer sshsserver -SQLDatabase toystore -Edition Hyperscale

    The preceding command calls the ProvisionAzureSQLDatabase.ps1 script to provision a new Hyperscale SQL database, toystore.

    Note

    Change the SQLServer and SQLDatabase parameter values to avoid getting a Server/Database already exists error.

  2. Once the script completes, log in to the Azure portal and click All resources in the left navigation pane.
  3. Click toystore to open the details window:

Figure 2.11: The All resources panel

The pricing tier is Hyperscale, Gen4, 1 vCore:

Figure 2.12: Configure pane in the toystore SQL database

Scaling up the Azure SQL Database Service Tier

In this section, we'll learn how to scale up the Azure SQL Database service tier for better performance. Let's go back to our previous example of Mike, who observes an increase in the load on the Azure SQL database. To overcome this problem, he plans to change the database's service tier so that it can handle the increased load. This can be achieved via the following steps:

  1. Open a new PowerShell console. In the PowerShell console, execute the following command to create a new Azure SQL Database from a bacpac file:

    C:\Code\Lesson02\ImportAzureSQLDB.ps1

  2. Provide the Azure SQL server name, SQL database name, Azure SQL Server administrator user and password, bacpac file path, and sqlpackage.exe path, as shown in the following screenshot:

    Figure 2.13: The Windows PowerShell window

    The script will use sqlpackage.exe to import the bacpac file as a new SQL Database on the given Azure SQL Server. The database is created in the Basic service tier, as specified in the PowerShell script.

    It may take 10–15 minutes to import the SQL database.

  3. Open C:\Code\Lesson02\ExecuteQuery.bat in Notepad. It contains the following commands:

    ostress -Sazuresqlservername.database.windows.net -Uuser -Ppassword -dazuresqldatabase -Q"SELECT * FROM Warehouse.StockItems si join Warehouse.StockItemHoldings sh on si.StockItemID=sh.StockItemID join Sales.OrderLines ol on ol.StockItemID=si.StockItemID" -n25 -r20 -q

  4. Replace azuresqlservername, user, password, and azuresqldatabase with the appropriate values. For example, if you are running the preceding command against Azure SQL Database with toystore hosted on the toyfactory Azure SQL server with the username sqladmin and the password Packt@pub2, then the command will be as follows:

    ostress -Stoyfactory.database.windows.net -Usqladmin -PPackt@pub2 -dtoystore -Q"SELECT * FROM Warehouse.StockItems si join Warehouse.StockItemHoldings sh on si.StockItemID=sh.StockItemID join Sales.OrderLines ol on ol.StockItemID=si.StockItemID" -n25 -r20 -q

    The command will run 25 (specified by the -n25 parameter) concurrent sessions, and each session will execute the query (specified by the -Q parameter) 20 times.
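Together, these two flags determine the total number of query executions the test generates:

```python
sessions = 25    # -n25: concurrent sessions
iterations = 20  # -r20: times each session executes the query
print(sessions * iterations)  # -> 500 query executions in total
```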

  5. Open the RML command prompt, enter the following command, and press Enter:

    C:\Code\Lesson02\ExecuteQuery.bat

    This will run the ostress command. Wait for the command to finish execution. Record the execution time:

    Figure 2.14: RML command prompt

    It took around 1 minute and 52 seconds to run 25 concurrent connections against the Basic service tier.

  6. The next step is to scale up the service tier from Basic to Standard S3. In the PowerShell console, execute the following command:

    C:\Code\Lesson02\ScaleUpAzureSQLDB.ps1

    Provide the parameters as shown in the following screenshot:

    Figure 2.15: Scaling up the service tier

    Observe that the database Edition has been changed to Standard.

  7. Open a new RML command prompt and run the same ostress command as in step 4. You should see faster query execution time in the Standard S3 tier than in the Basic tier.

    Here's the output from the ExecuteQuery.bat command:

Figure 2.16: Output from the ExecuteQuery.bat command

It took around 42 seconds to run 25 concurrent connections against the Standard S3 service tier, over 60% faster than the Basic tier. You get this performance improvement just by scaling up the service tier, without any query or database optimization.
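The improvement quoted above is easy to verify from the two timings:

```python
basic_seconds = 112     # ~1 minute 52 seconds on the Basic tier
standard_seconds = 42   # on the Standard S3 tier
improvement = (basic_seconds - standard_seconds) / basic_seconds
print(f"{improvement:.1%}")  # -> 62.5%
```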

Changing a Service Tier

You can scale up or scale down an Azure SQL Database at any point in time. This gives the flexibility to save money by scaling down to a lower service tier in off-peak hours and scaling up to a higher service tier for better performance in peak hours.

You can change a service tier either manually or automatically. A service tier change is performed by creating a replica of the original database at the new service tier's performance level. The time taken to change the service tier depends on the size of the database as well as its service tiers before and after the change.

Once the replica is ready, the connections are switched over to the replica. This ensures that the original database is available for applications during the service tier change. This also causes all in-flight transactions to be rolled back during the brief period when the switch to the replica is made. The average switchover time is four seconds, and it may increase if there are a large number of in-flight transactions.

You may have to add retry logic to the application to handle the dropped connections that occur when a service tier change completes.
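A minimal sketch of such retry logic is shown below. The operation and its error type are stand-ins; a real application would catch its database driver's transient-error exceptions (for example, the connection errors raised by its SQL client library):

```python
import time

def run_with_retry(operation, retries=5, base_delay=0.5):
    """Retry an operation that may fail with a dropped connection,
    e.g. during the brief switchover of a service tier change.
    Delays between attempts grow exponentially."""
    for attempt in range(retries):
        try:
            return operation()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Usage with a stand-in query that fails twice before succeeding:
state = {"calls": 0}
def flaky_query():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("connection dropped during switchover")
    return "rows"

print(run_with_retry(flaky_query, base_delay=0.01))  # -> rows
```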