Azure Data Warehouse Performance Tuning 101

When it comes to performance tuning or troubleshooting a standard SQL Server instance, many tools and techniques exist: query execution plans can be analysed, SQL Server Profiler is available, and there is a host of DMVs that a DBA cannot live without.

An Azure Data Warehouse exposes similar DMVs. On top of those, some basic principles can be applied to help keep performance optimal (a short maintenance sketch follows the list):

  1. Create statistics on every column
  2. Index join predicate columns: use non-clustered indexes on small tables, on tables that already have a columnstore index, or in place of a clustered index
  3. Use a consistent distribution key for all dimension and fact tables
  4. Use columnstore indexes for extra-large tables
  5. Use clustered indexes for large tables
  6. Rebuild indexes regularly as part of an inter-day or weekly maintenance process
  7. Update statistics regularly as part of an intra-day or inter-day maintenance process
  8. Limit concurrency utilisation using resource classes
  9. Choose the correct distribution type to avoid expensive DMS (Data Movement Service) operations
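
As a minimal sketch of what points 1, 6 and 7 can look like in practice (the server, database, table and credential names below are placeholders, and the SqlServer module's Invoke-Sqlcmd is assumed):

$sql = @"
-- Point 1: statistics on a commonly joined / filtered column
CREATE STATISTICS stat_FactSales_DateKey ON dbo.FactSales (DateKey);

-- Point 7: keep statistics current between loads
UPDATE STATISTICS dbo.FactSales;

-- Point 6: rebuild indexes to restore columnstore row-group quality
ALTER INDEX ALL ON dbo.FactSales REBUILD;
"@

# Hypothetical maintenance run against the warehouse
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydw" `
    -Username $env:DW_USER -Password $env:DW_PASS -Query $sql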

Azure MSI: Connect Using PowerShell Or .NET?

Managed Service Identity (MSI) was introduced last year. Since then quite a few articles have been written about it:

  1. Use a Windows VM Managed Service Identity (MSI) to access Azure Key Vault
  2. Azure SQL authentication with a Managed Service Identity

MSI gives your code an automatically managed identity for authenticating to Azure services, so that you can keep credentials out of your code.

For most services, MSI can be enabled using PowerShell, ARM templates, or the portal.

Enabling and configuring MSI is usually performed in three steps (sketched below):

  1. Enable MSI for the source resource
  2. Grant the application's service principal (SPN) access to the target resource
  3. Add MSI authentication to the code hosted on the source resource
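
A sketch of steps 1 and 2 using the AzureRM cmdlets (all resource names below are placeholders, and a function app is assumed as the source resource):

# Step 1: enable MSI on the source resource (function apps are web apps under the hood)
$app = Set-AzureRmWebApp -ResourceGroupName "my-rg" -Name "my-func-app" -AssignIdentity $true

# Step 2: grant the generated service principal access to the target resource,
# e.g. a Key Vault ...
Set-AzureRmKeyVaultAccessPolicy -VaultName "my-vault" `
    -ObjectId $app.Identity.PrincipalId `
    -PermissionsToSecrets get

# ... or, for an Azure SQL target, a contained user created inside the database:
#   CREATE USER [my-func-app] FROM EXTERNAL PROVIDER;
#   ALTER ROLE db_datareader ADD MEMBER [my-func-app];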

An example using Azure Functions can be found on GitHub.

I was surprised, though, to find that connecting to Azure SQL using PowerShell with MSI does not work when hosted in a function app.

Also included in the Visual Studio solution is the function app running PowerShell. It fails to log in to Azure SQL Server with the following error:

Exception while executing function: Functions.dm_pdw_exec_sessions. Microsoft.Azure.WebJobs.Script: PowerShell script error. System.Management.Automation: Exception calling "Open" with "0" argument(s): "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.". .Net SqlClient Data Provider: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON

At this point I suspect impersonation is not working correctly when IIS hosts the function app.
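
For context, this is roughly what step 3 boils down to when done by hand: once MSI is enabled, App Service injects the MSI_ENDPOINT and MSI_SECRET environment variables, a token is requested from that endpoint, and the token is attached to the connection. A minimal sketch (server and database names are placeholders):

# Request a token for Azure SQL from the local MSI endpoint
$tokenResponse = Invoke-RestMethod `
    -Uri "$($env:MSI_ENDPOINT)?resource=https://database.windows.net/&api-version=2017-09-01" `
    -Headers @{ Secret = $env:MSI_SECRET }

# Attach the token to the connection instead of a username / password
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=tcp:myserver.database.windows.net,1433;Database=mydb;")
$conn.AccessToken = $tokenResponse.access_token
$conn.Open()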

Azure Parallel Data Warehouse (PDW) Errors #1: Cancelled Transactions

Microsoft’s Azure Data Warehouse is a cloud-hosted PaaS offering, which means compute and storage resources are managed by Microsoft and the Azure fabric.

Scheduled and unscheduled maintenance activities occur that are supposed to be transparent, i.e. they should not impact the platform. This turns out to be a false assumption.

On some occasions maintenance will impact the platform. For example, back-end “node movements” that rotate compute / storage resources can cause errors on one of the compute / data nodes to bubble up into a transaction, causing the transaction to enter a cancelled state.

Protect your transactions

The best way to protect a transaction when dealing with Azure DW is to integrate retry logic into your code:

$retry = 5
while ($retry -gt 0)
{
    try
    {
        # Connect here and perform the SQL / ETL operation ...

        # Success: end the loop
        $retry = 0
    }
    catch
    {
        # Decrement the remaining attempts
        $retry -= 1
        if ($retry -gt 0)
        {
            # Back off before retrying, giving the platform time to settle
            Start-Sleep -Seconds 30
        }
        else
        {
            # Out of retries: surface the last error
            throw
        }
    }
}
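
The commented placeholder in the try block above is where the real work goes. A hypothetical iteration, with server, database and procedure names as placeholders:

# Open a connection and run one ETL step; any failure lands in the catch block above
$connStr = "Server=tcp:myserver.database.windows.net,1433;Database=mydw;User ID=$($env:DW_USER);Password=$($env:DW_PASS);"
$conn = New-Object System.Data.SqlClient.SqlConnection($connStr)
try
{
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "EXEC dbo.usp_LoadFactSales;"
    $cmd.ExecuteNonQuery() | Out-Null
}
finally
{
    $conn.Close()
}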

Forecasting Database Growth

I’ve been thinking about the most accurate method for forecasting. Closer to my line of work: I’d like to forecast database growth over a specified period of time into the future.

The common method I have seen used is to simply use Excel and plot a trend line from previously collected data. While this can be quite usable in most scenarios, I don’t believe it captures everything, such as seasonality. Seasonality here refers to the periodic data growth / purging that occurs. Most ignore this, or are simply not aware of it, but most databases will have some seasonal growth which needs to be added to the forecasting model.

The model used will be a two-factor model of the form:

y = D + X + K

Where y is the forecast database size, D is the seasonality variable, X is the stochastic variable with trend, and K is a constant. For example, if the trend term X gives 480 GB, the seasonal term D adds 15 GB for a quarter-end batch, and K is a 5 GB baseline, the forecast is y = 15 + 480 + 5 = 500 GB.

Before we can identify seasonality and subsequently forecast, we need to gather some data (a time series). In my case I’m going to collect the database size hourly, simply because I want to retain a level of precision.

Let’s use a simple Perl script (for portability; honestly, anything that can remotely execute a SQL query will do) to connect to the database and execute some SQL.
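
The shape of that collection step, sketched here in PowerShell for brevity (server, database and output path are placeholders; scheduling the hourly run is left to cron or Task Scheduler):

# Append one timestamped database size sample (in MB) to a CSV for later analysis
$size = Invoke-Sqlcmd -ServerInstance "mydbserver" -Database "mydb" `
    -Query "SELECT SUM(size) * 8 / 1024 AS size_mb FROM sys.database_files;"

"{0},{1}" -f (Get-Date -Format "yyyy-MM-dd HH:00"), $size.size_mb |
    Add-Content -Path "C:\data\db_size.csv"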

See the code for data gathering in part 2.

Don’t want to learn Perl, Python, Shell … Ok, try On.Inno

There are people on this planet earth, wonderful as it is, who don’t want to learn about the glue that holds all things together. And by glue I’m referring to the workhorse scripting languages that never really get the attention they deserve:

  • Perl
  • Python
  • Shell

One cannot truly say they have experienced a production infrastructure without at least typing one command in any of the above.

Well, those who are not ready for this level can jump to the next one. I have something for you called on.inno.

on.inno is a high-level scripting language that has two logical constructs (IF … ELSE) … that’s it.

It’s simple; more to come.

Example:

EXECIFSTART<,>hostname<,>piroserv2<,>1

EXEC<,><!>This is a comment: embed anything here Perl|Python|Shell><,><,>1

EXECEND<,>echo if<,>if

Why did I create this simple language? I work in an environment that has different versions of OSs (Linux, Windows, and Solaris), and not all of them have the same components. To save myself from having to, say, install Perl on all of them and simply drop some code there (ideal world), I created my own interpreted script to interface with the remote OS.

use Parallel::ForkManager

A very powerful and easy to use Perl multitasking module.

I use it in just about every piece of code I write now, as it’s annoying to continuously update code just to make it scalable. In my scripts I usually like to “fork” things off, especially when manipulating data.

With this module it’s as easy as a foreach loop:

use strict;
use warnings;

use Parallel::ForkManager;
use PiroLabs::Utils::DataCruncher;

my $max_threads = 10;
my @tasks       = qw(x / + -);

# One fork manager shared across all tasks
my $pm = Parallel::ForkManager->new($max_threads);

foreach my $task (@tasks) {
    # Parent: fork a child for this task and move on to the next one
    my $pid = $pm->start and next;

    # Child: crunch the data set with this task's operator
    my $dc = PiroLabs::Utils::DataCruncher->new([0 .. 2000000], $task);
    $dc->Execute();

    $pm->finish;
}

# Parent blocks here until every child has exited
$pm->wait_all_children;

In the snippet above I want to perform multiplication, division, addition and subtraction all at the same time on some defined data set.

In the next post I will show how to pass data from the child processes back to the parent.