Simon Online

2023-02-15

Excel and Ruby

Excel is the king of spreadsheets and I often find myself in situations where I have to write out Excel files in an application. I’d say that as an application grows, the probability of needing Excel import or export approaches 1. Fortunately, there are lots of libraries out there to help with Excel across just about every language. The quality and usefulness of these libraries varies a lot. In Ruby land there seem to be a few options.

Spreadsheet

https://github.com/zdavatz/spreadsheet/

As the name suggests, this library deals with Excel spreadsheets. It is able to both read and write them, building on the Spreadsheet::Excel and ParseExcel libraries. However, it only supports the older XLS file format. While that format is still widely used, it hasn’t been the default since Excel 2007. I try to stay clear of the format as much as possible. There haven’t been any releases of this library in about 18 months, but there haven’t been any releases of the XLS file format in decades either, so it doesn’t seem like a big deal.

The library can be installed using

gem install spreadsheet

Then you can use it like so

require 'spreadsheet'

workbook = Spreadsheet.open("test.xls")
worksheet = workbook.worksheet 0
worksheet.rows[1][1] = "Hello there!"
workbook.write("test2.xls")

There are some limitations around editing files, such as cell formats not updating, but for most things it should be fine.

RubyXL

https://github.com/weshatheleopard/rubyXL

This library works on the more modern XLSX file format. It is able to read files and write them back out with modifications. However, there are some limitations, such as being unable to insert images.

require 'rubyXL'

# only do this if you don't care about memory usage, otherwise you can load
# submodules separately depending on what you need
require 'rubyXL/convenience_methods'

workbook = RubyXL::Parser.parse("test.xlsx")
worksheet = workbook[0]
cell = worksheet.cell_at('A1')
cell.change_contents("Hello there!")
workbook.write("test2.xlsx")

CAXLSX

https://github.com/caxlsx/caxlsx

This library is the community supported version of AXLSX. It is able to generate XLSX files but not read or modify them. There is rich support for charts, images and other more advanced Excel features.

Install using

gem install caxlsx

And then a simple example looks like

require 'axlsx'

p = Axlsx::Package.new
workbook = p.workbook

workbook.add_worksheet(name: 'Test') do |sheet|
  sheet.add_row ['Hello there!']
end

p.serialize "test.xlsx"

Of all the libraries mentioned here the documentation for this one is the best. It is also the most actively maintained. The examples directory https://github.com/caxlsx/caxlsx/tree/master/examples gives a plethora of examples of how to use the library.
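
Since charts are one of the big draws, here is a rough sketch of what one looks like, adapted from the axlsx examples (the sheet layout and data are invented for illustration):

require 'axlsx'

p = Axlsx::Package.new
p.workbook.add_worksheet(name: 'Sales') do |sheet|
  sheet.add_row ['Quarter', 'Sales']
  sheet.add_row ['Q1', 100]
  sheet.add_row ['Q2', 150]

  # place a bar chart beside the data it plots
  sheet.add_chart(Axlsx::Bar3DChart, start_at: 'D1', end_at: 'K15', title: 'Sales by Quarter') do |chart|
    chart.add_series data: sheet['B2:B3'], labels: sheet['A2:A3']
  end
end

p.serialize 'chart_test.xlsx'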

Fast Excel

https://github.com/Paxa/fast_excel

This library focuses on being the fastest Excel library for Ruby. The heavy lifting is done in C (it wraps the libxlsxwriter library), so it comes with all the caveats of running native code. Similar to CAXLSX, it can only write new files; it cannot read or modify existing ones.

require 'fast_excel'

# constant_memory: true streams changes to disk so it means that you cannot
# modify an already written record
workbook = FastExcel.open("test.xlsx", constant_memory: true)
worksheet = workbook.add_worksheet("Test")

bold = workbook.bold_format
worksheet.set_column(0, 0, FastExcel::DEF_COL_WIDTH, bold)
worksheet << ["Hello World"]
workbook.close

As you can see, the library really excels at adding consistently shaped rows. You’re unlikely to build a complex spreadsheet with headers and footers using this tooling.
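
For instance, dumping a large uniform dataset is where it shines. A quick sketch of that kind of workload, using only the calls shown above (the data is invented):

require 'fast_excel'

workbook = FastExcel.open("big_report.xlsx", constant_memory: true)
worksheet = workbook.add_worksheet("Data")

# one header row followed by many identically shaped data rows
worksheet << ["id", "value"]
10_000.times do |i|
  worksheet << [i, i * 2]
end

workbook.close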

2022-11-17

Bulk Insert SQL Geometry on .NET Core

I have been updating an application from the full .NET Framework to .NET 6 this week. One of the things this app does is bulk load data into SQL Server. Normally this works just fine, but some of the data is geography data, which requires a special package to be installed: Microsoft.SqlServer.Types. This package is owned by the SQL Server team so, as you’d expect, it is ridiculously behind the times. Fortunately, they are working on updating it and it is now available for .NET Standard 2.1 in preview.

The steps I needed to take to update the app were:

  1. Install the preview package for Microsoft.SqlServer.Types
  2. Update the SQL client package from System.Data.SqlClient to Microsoft.Data.SqlClient

After that the tests we had for inserting polygons worked just great. This has been a bit of a challenge over the years but I’m delighted that we’re almost there. We just need a non-preview version of the types package and we should be good to go.
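
For context, the bulk load itself is nothing exotic. A minimal sketch of the kind of thing we’re doing looks roughly like this (the table, column names and connection string are placeholders):

using System.Data;
using Microsoft.Data.SqlClient;
using Microsoft.SqlServer.Types;

var connectionString = "<your connection string>";

// Build a DataTable whose geography column holds SqlGeography values
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Location", typeof(SqlGeography));
table.Rows.Add(1, SqlGeography.Parse("POINT(-114.07 51.05)"));

using var connection = new SqlConnection(connectionString);
connection.Open();

// SqlBulkCopy streams the rows, including the geography UDT column, to the server
using var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "dbo.Locations" };
bulkCopy.WriteToServer(table);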

Gotchas

When I’d only done step 1 I ran into errors like

System.InvalidOperationException : The given value of type SqlGeometry from the data source cannot be converted to type udt of the specified target column.
---- System.ArgumentException : Specified type is not registered on the target server. Microsoft.SqlServer.Types.SqlGeometry, Microsoft.SqlServer.Types, Version=16.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91.

I went down a rabbit hole on that one before spotting a post from MVP Erik Jensen https://github.com/ErikEJ/EntityFramework6PowerTools/issues/103 which sent me in the right direction.

2022-11-11

Removing Azure Backups in Terraform

If you have a VM backup in your Terraform state and need to get rid of it, be aware that it is probably going to break your deployment pipeline. The reason is that Terraform will delete the item but then find that the resource is still there. This is because backup deletion takes a while (say 14 days). Eventually the backup will be deleted, but not before Terraform times out.

The solution I’m using is to just go in and manually delete the backup from the Terraform state to unblock my pipelines.

terraform state list | grep <name of your backup>
# make note of the resource identifier
terraform state rm <found resource identifier>

Editing Terraform state seems scary but it’s not too bad after you do it a bit. Take backups!

2022-11-01

Dealing with Set-Output Deprecation Warnings in Terraform GitHub Actions

I’ve got a build that is running Terraform on GitHub Actions (I actually have loads of them) and I’ve been noticing that they are very chatty about warnings now.

The warning is

The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

The history here, without reading that link, is basically that GitHub is changing how we push variables to the pipeline for use in later steps. There were some security implications with the old approach and the new approach should be better.

- name: Save variable
  run: echo "SOMENAME=PICKLE" >> $GITHUB_STATE

- name: Set output
  run: echo "SOMENAME=PICKLE" >> $GITHUB_OUTPUT

The problem was that the steps I was having trouble with didn’t obviously use the set-output command.

 ...
- name: Init Terraform
  run: terraform init 
- name: Validate Terraform
  run: terraform validate
...        

I had to dig a bit to find out that it was actually the terraform command that was causing the problem. You see, as part of the build I install the Terraform CLI using the setup-terraform action

  - name: HashiCorp - Setup Terraform
    uses: hashicorp/setup-terraform@v2.0.2
    with:
        terraform_version: 1.1.9
        terraform_wrapper: true

Turns out that, as of writing, the latest version of the wrapper installed by the setup-terraform action makes use of an older version of the @actions/core package. That package is what is used to set outputs, and before version 1.10 it did so using set-output. A fix has been merged into the setup-terraform project but no update has been released yet.

For now I found that I had no need for the wrapper so I disabled it with

  - name: HashiCorp - Setup Terraform
    uses: hashicorp/setup-terraform@v2.0.2
    with:
        terraform_version: 1.1.9
        terraform_wrapper: false

but for future readers: if there is a more recent version of setup-terraform than 2.0.2 then you can update to that to remove the warnings. Now my build is clean.

2022-11-01

My Theory of GitHub Actions and IaC

I do all sorts of random work and one of those things is helping out on some infrastructure deployments on Azure. Coming from a development background, I’m allergic to clicking around inside the Azure portal to configure things in a totally non-repeatable way. So I’ve been using Terraform to do the deployments. We have built up a pretty good history of using Terraform - today I might use Pulumi instead, but the actual tool isn’t all that important compared to the theory.

I’m looking to achieve a number of things:

  1. Make deployments easy for people to do
  2. Make deployments repeatable - we should be able to use the same deployment scripts to set up a dev environment or recover from a disaster with minimal effort
  3. Ensure that changes are reviewed before they are applied

To meet these requirements a build pipeline in GitHub Actions (or Azure DevOps, for that matter) is an ideal fit. We maintain our Terraform scripts in a repository. Typically we use one repository per resource group, but your needs may vary. There isn’t any monetary cost to having multiple repositories but there can be some cognitive load in remembering where the right repository is (more on that later).

Source Code

Changes to the infrastructure definition code are checked into a shared repository. Access to this code is fairly relaxed: developers and ops people can all make changes to it. We strive to use normal code review approaches when checking in changes. We’re not super rigorous about changes that are checked in because many of the people checking in changes have ops backgrounds and aren’t all that well versed in the PR process. I want to make this as easy for them as possible so they aren’t tempted to make changes directly in Azure.

In my experience there is a very strong temptation for people to abandon rigour when a change is needed at once to address a business need. We need to change a firewall rule - no time to review that, let’s just do it. I’m not saying that this is a good thing but it is a reality. Driving people to Terraform needs to be easy. Having their ad-hoc changes overwritten by a Terraform deploy will also help drive the point home. Stick and carrot.

Builds

A typical build pipeline for us includes three stages.

The first stage runs on a check-in trigger. It validates that the Terraform scripts are syntactically correct and well linted. A small number of our builds stop here. Unlike application deployments, we typically want these changes to be live right away, or at most during some maintenance window shortly after the changes have been authored. That deployments run close to the time the changes were authored helps offset our lack of rigour around code reviews.

The next stage is to preview what changes will be performed by Terraform. This stage is gated such that it needs somebody to actually approve it. It is low risk because no changes are made - we run a terraform plan and see what changes would be made. Reading over these changes is very helpful because we often catch unintended consequences here. Accidentally destroying and recreating a VM instead of renaming it? Caught here. Removing a tag that somebody manually applied to a resource and that should be preserved? Caught here.

The final stage in the pipeline is to run the Terraform changes. This step is also gated to prevent us from deploying without proper approvals. Depending on the environment we might need two approvals, or at least one approval that isn’t from the person writing the change. More eyes on a change will catch problems more easily and also socialize changes so that it isn’t a huge shock to the entire ops team that we now have a MySQL server in the environment, or whatever it may be.
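
Stripped down, the skeleton of one of these pipelines looks roughly like this (the job names, trigger and environment names are illustrative, and the Azure credentials are omitted for brevity):

name: infrastructure
on: push

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      - run: terraform validate

  plan:
    needs: validate
    runs-on: ubuntu-latest
    environment: plan # gated - somebody has to approve running the preview
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      - run: terraform plan

  apply:
    needs: plan
    runs-on: ubuntu-latest
    environment: production # gated - requires reviewer approval before applying
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      - run: terraform apply -auto-approve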

Tags

Most Azure resources support tagging. These are basically just labels that you can apply to resources. We use tags to help us organize our resources. We have a tag called environment which is used to indicate what environment the resource is in. We have a tag called owner which is used to indicate who owns the resource. We have a tag called project which is used to indicate what project the resource is associated with. But for these builds the most important tags are IaC Technology and IaC Source. The first is used to tell people that the resources are part of a Terraform deployment. The second is used to indicate where on GitHub the Terraform scripts are located. These tags make it really easy for people to find the Terraform scripts for a resource and get a change in place.
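
In Terraform that just means every resource (or module) carries a tags block along these lines (the values here are invented):

tags = {
  environment      = "production"
  owner            = "ops-team"
  project          = "widgets"
  "IaC Technology" = "Terraform"
  "IaC Source"     = "https://github.com/our-org/widgets-infrastructure"
}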

Permissions

I mentioned earlier that we like to guide ops people to make environment changes in Terraform rather than directly in Azure. One of the approaches we take is to not grant owner or writer permissions on resources directly to people, be they ops or dev. Instead we have a number of permission-restricted service principals that are used to make changes to resources. These service principals are granted permissions to specific resource groups, and they are what’s used in the pipeline to make the changes. This means that if somebody wants to make a change to a resource they need to go through the Terraform pipeline.
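
Creating one of those scoped service principals is a one-liner with the az CLI (the name and scope here are placeholders):

az ad sp create-for-rbac --name "sp-widgets-terraform" \
    --role Contributor \
    --scopes /subscriptions/<subscription id>/resourceGroups/<resource group name>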

We keep the client ID and secret in the GitHub pipeline’s secrets.

In this example we just keep a single repository-wide key because we only have one environment. We’d make use of environment-specific secrets if we had more than one environment.
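
The azurerm provider picks the service principal up from environment variables, so in the workflow the Terraform steps get something like this (the secret names are whatever you called them in the repository settings):

env:
  ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
  ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
  ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
  ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}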

This approach has the added bonus of providing rip stops in the event that we leak some keys somewhere. At worst that service principal has access to only one resource group, so an attacker is limited to messing with that group and cannot escape to the larger environment.

Achieving our Goals

To my mind this approach is exactly how IaC was meant to be used. We have a single source of truth for our infrastructure. We can make changes to that infrastructure in a repeatable way. We can review those changes before they are applied. All this while keeping the barrier to entry low for people who are not familiar with the code review process.

Future Steps

We already make use of Terraform modules for most of our deployments but we’re not doing a great job of reusing these modules from project to project. We’re hoping to keep a library of these modules around, which can help us standardize things. For instance, our VM module doesn’t just provision a VM - it sets up backups and uses a standardized source image.

I also really like the idea of using the build pipeline to annotate pull requests with the Terraform changes using https://github.com/marketplace/actions/terraform-pr-commenter. Surfacing this directly on the PR would save the reviewers the trouble of going through the pipeline to see what changes are being made. However it would add friction for our ops team as they’d have to set up PRs.

2022-08-12

Registering Terraform Providers

If you’re setting up a new Terraform project on Azure you might find yourself needing to register providers if you’re running with an identity that doesn’t have wide-ranging access to the subscription. I ran into this today with the error


│ Error: Error ensuring Resource Providers are registered.
│ 
│ Terraform automatically attempts to register the Resource Providers it supports to
│ ensure it's able to provision resources.
│ 
│ If you don't have permission to register Resource Providers you may wish to use the
│ "skip_provider_registration" flag in the Provider block to disable this functionality.
│ 
│ Please note that if you opt out of Resource Provider Registration and Terraform tries
│ to provision a resource from a Resource Provider which is unregistered, then the errors
│ may appear misleading - for example:
│ 
│ > API version 2019-XX-XX was not found for Microsoft.Foo
│ 
│ Could indicate either that the Resource Provider "Microsoft.Foo" requires registration,
│ but this could also indicate that this Azure Region doesn't support this API version.
│ 
│ More information on the "skip_provider_registration" flag can be found here:
│ https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#skip_provider_registration
│ 
│ Original Error: Cannnot register providers: Microsoft.StoragePool. Errors were: Cannot register provider Microsoft.StoragePool with Azure Resource Manager: resources.ProvidersClient#Register: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '************' with object id ''************'' does not have authorization to perform action 'Microsoft.StoragePool/register/action' over scope '/subscriptions/***' or the scope is invalid. If access was recently granted, please refresh your credentials.".
│ 
│   with provider["registry.terraform.io/hashicorp/azurerm"],
│   on environment.tf line 21, in provider "azurerm":
│   21: provider "azurerm" {

The account running Terraform in my GitHub Actions pipeline is restricted to Contributor over only the resource group into which I’m deploying, so it’s unable to register providers itself. Two things were needed to fix it:

  1. Tell Terraform not to try to register providers
  2. Register the providers manually

For 1, the provider block in the Terraform file needs to be updated to look like

  provider "azurerm" {
    features {
    }
    skip_provider_registration = true
}

For 2, log into the Azure portal and register the providers manually. Go to the subscription, select Resource Providers, search for the one you need, select it and hit Register. In my case the provider was already registered and the problem was just Terraform’s attempt to register it without sufficient permission.
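
Alternatively, if you do have an identity with enough rights at the subscription level, the same registration can be done from the command line:

az provider register --namespace Microsoft.StoragePool
az provider show --namespace Microsoft.StoragePool --query registrationState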


2022-08-04

Debugging Azure Container Instance Startup

I have some container instances which are failing to start up properly and the logs in the portal are blank. This makes debugging them kind of difficult.

On the command line running

az container logs -g <resource group name> -n <container group name> --container <container name>

Just gave me an output of None. Not useful either.

Fortunately, you can attach directly to the log streams coming out of the container, which will give you a better idea of what is going on.

az container attach -g <resource group name> -n <container group name> --container <container name>

This was able to give me output like

Start streaming logs:
/usr/local/lib/python3.9/site-packages/environ/environ.py:628: UserWarning: /ric-api/core/.env doesn't exist - if you're not configuring your environment separately, create one.
  warnings.warn(
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/environ/environ.py", line 273, in get_value
    value = self.ENVIRON[var]
  File "/usr/local/lib/python3.9/os.py", line 679, in __getitem__
    raise KeyError(key) from None
KeyError: 'DB_PORT'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ric-api/manage.py", line 22, in <module>
    main()
  File "/ric-api/manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/runserver.py", line 61, in execute
    super().execute(*args, **options)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/runserver.py", line 68, in handle
    if not settings.DEBUG and not settings.ALLOWED_HOSTS:
  File "/usr/local/lib/python3.9/site-packages/django/conf/__init__.py", line 82, in __getattr__
    self._setup(name)
  File "/usr/local/lib/python3.9/site-packages/django/conf/__init__.py", line 69, in _setup
    self._wrapped = Settings(settings_module)
  File "/usr/local/lib/python3.9/site-packages/django/conf/__init__.py", line 170, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/ric-api/core/settings.py", line 114, in <module>
    'PORT': env('DB_PORT'),
  File "/usr/local/lib/python3.9/site-packages/environ/environ.py", line 123, in __call__
    return self.get_value(var, cast=cast, default=default, parse_default=parse_default)
  File "/usr/local/lib/python3.9/site-packages/environ/environ.py", line 277, in get_value
    raise ImproperlyConfigured(error_msg)
django.core.exceptions.ImproperlyConfigured: Set the DB_PORT environment variable
2022-07-14T14:37:17.6003172Z stderr F

Exception in thread Thread-1:
Traceback (most recent call last):
  File "threading.py", line 932, in _bootstrap_inner
  File "threading.py", line 870, in run
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/container/custom.py", line 837, in _stream_container_events_and_logs
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/container/custom.py", line 791, in _stream_logs
AttributeError: 'NoneType' object has no attribute 'split'

Looks like I missed adding a DB_PORT to the environment variables.
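
The fix in my case was just supplying the variable when creating the container group, something along these lines if you create it from the CLI (the names and port are placeholders):

az container create -g <resource group name> -n <container group name> --image <image> --environment-variables DB_PORT=5432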

2022-05-30

Consuming SOAP Services in .NET Core

Today I ran into the need to consume an older SOAP web service in .NET Core. I was really fortunate in my timing because Core WCF was just released and it makes the whole process much easier.

Taking a step back for you youngsters out there: SOAP was the service communication technology that existed before REST showed up with its JSON and ate everybody’s lunch. SOAP is really just the name for the messaging protocol, but I think most of us would refer to the whole method of invoking remote procedures over the web as SOAP web services. SOAP, or Simple Object Access Protocol, is an XML-based standard for serializing objects from various different languages in a way that lets Java talk to .NET talk to Python. Unlike JSON it was a pretty well thought out protocol and had standard representations of things like dates, which JSON just kind of ignores.

Web services were closer to remote method invocation in that you would call something like GetUserId rather than the RESTful approach of hitting an endpoint like /Users/7 to get the user with Id 7. The endpoints provided by a web service were usually written down in a big long XML document called a WSDL, which stands for Web Services Description Language.

Web services gained a reputation for being very enterprisey and complex. There were a large number of additional standards defined around them, commonly known as WS-*. These include such things as WS-Discovery, WS-Security, WS-Policy and, my personal favorite, the memorably named Web Single Sign-On Metadata Exchange Protocol.

Core WCF

In the last month we’ve seen the 1.0 release of Core WCF, which I’m pretty certain I mocked as a silly thing in which to invest resources. The tables have turned and now I’m the one who needs it, so thanks to Scott Hunter or whoever it was that allocated resources to developing this.

To get started I needed to find the WSDLs for the services I wanted. This required a call to the support department of the company providing the services. They had a .NET library they pointed me to, but it was compiled against .NET 4.5 so I wanted to refresh it. Fortunately the Core WCF release includes an updated svcutil. This tool will read a WSDL and generate service stubs in .NET for you.

I started with a new console project

dotnet new console

Then installed the dotnet-svcutil tool globally (you only need to do this once) and generated a service reference

dotnet tool install --global dotnet-svcutil
dotnet-svcutil --roll-forward LatestMajor https://energydataservices.ihsenergy.com/services/v2/searchservice.svc

This updated my project’s csproj file to include a whole whack of new library references

<?xml version="1.0" encoding="utf-8"?>
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <RootNamespace>wsdl_test</RootNamespace>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="dotnet-svcutil" Version="1.0.*" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="CoreWCF.Http" Version="1.0.0" />
    <PackageReference Include="CoreWCF.Primitives" Version="1.0.0" />
    <Reference Include="System.ServiceModel">
      <HintPath>System.ServiceModel</HintPath>
    </Reference>
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="System.ServiceModel.Duplex" Version="4.8.*" />
    <PackageReference Include="System.ServiceModel.Http" Version="4.8.*" />
    <PackageReference Include="System.ServiceModel.NetTcp" Version="4.8.*" />
    <PackageReference Include="System.ServiceModel.Security" Version="4.8.*" />
  </ItemGroup>
</Project>

It also generated a 13,000-line service reference file in the project. Wowzers. I’m glad I don’t have to write that fellow myself.

With that all generated I’m now able to call methods in that service by just doing

using ServiceReference;
var client = new SearchServiceClient();
var result = await client.SomeMethodAsync();

This example really only scratches the surface of what the new Core WCF brings to .NET Core. I certainly wouldn’t want to develop new WCF services, but for consuming existing ones, or even updating existing ones, this library is going to be a great boost to productivity.

2022-04-07

Azure Functions Provider Error

I started up a previously working Azure Functions project today that I hadn’t touched in a week. It failed to start with an error like this

A host error has occurred during startup operation 'b59ba8b8-f264-4274-a9eb-e17ba0e02ed8'.
api: Could not load file or assembly 'Microsoft.Extensions.Options, Version=6.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The system cannot find the file specified.
Value cannot be null. (Parameter 'provider')

This is the sort of error that terrifies me. Something is wrong but who knows what. No changes in git and an infinity of generic errors on Google for Could not load file or assembly. Eventually, after some digging, it seemed like I might be suffering from some corrupted tooling (some hints about that here: https://github.com/Azure/azure-functions-core-tools/issues/2232). I was able to fix mine by downloading the latest version of the tooling from https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=v4%2Cwindows%2Ccsharp%2Cportal%2Cbash

2022-04-07

Which SQL Hosting Option is Right for Me?

There are a bunch of different ways to host SQL Server workloads on Azure. Answering some questions about how you use SQL Server can help guide us to picking the right option for you.

The three options for hosting we’re considering are

  1. SQL Azure - https://azure.microsoft.com/en-us/products/azure-sql/database/#overview
  2. Azure SQL Managed Instance - https://azure.microsoft.com/en-us/products/azure-sql/managed-instance/
  3. SQL Server on VM - https://azure.microsoft.com/en-us/services/virtual-machines/sql-server/#overview

I’ve listed these in my order of preference. I’d rather push people to a more managed solution than a less managed one. There is a huge shortage of SQL Server skills out there, so if you can take a more managed approach then you’re less likely to run into problems that require finding a SQL expert. I frequently say to companies that they’re not in the business of managing SQL Server but in the business of building whatever widgets they build. Unless there is a real need, don’t waste company resources building custom solutions when you can buy a 90% solution off the shelf.

When I talk with companies about migrating their existing workloads to the cloud from on-premises SQL Servers I find myself asking these questions:

  1. Does your solution use cross database joins?
  2. Does your solution make use of the SQL Agent to run jobs?
  3. Does your solution use FILESTREAM to access files on disk?
  4. Does your solution require fine tuning of availability groups?
  5. Does your solution require SQL logins from CERTIFICATE, ASYMMETRIC KEY or SID?
  6. Do you need to make use of a compatibility level below 100?
  7. Do you need to make use of database mirroring?
  8. Does your solution need to start and stop job scheduling?
  9. Are you making use of SQL Server Reporting Services (SSRS)?
  10. Are you using xp_cmdshell anywhere in your application? (https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/xp-cmdshell-transact-sql?view=sql-server-ver15)

If the answer to any of the first 3 questions is yes then they can’t easily use SQL Azure* and should set the baseline to a managed instance. If the answer to any of the rest of the questions is yes then they should set the baseline to a VM running a full version of SQL Server. Only if the answer to all of these questions is no is SQL Azure the best solution.

  * Cross database joins and SQL Agent can be replaced by Elastic Query and Elastic Jobs, but neither one is an easy drop-in replacement so I typically don’t bother directing people to them for time-constrained migrations.