Azure Elastic Jobs for SQL Databases


What’s an Elastic Job?

While the definition of an Elastic Job is quite straightforward in Microsoft’s documentation, I always like to read about a real-life example of a technology or tool first – I find it usually makes the “a-ha” moment arrive sooner. I first came across Elastic Jobs when, on my day-to-day, very much Azure-centric project, I had to schedule the daily execution of a stored procedure in a SQL Database, running on an Azure SQL Server, for maintenance purposes. Naturally, when I heard the expression “daily schedule”, I was already imagining a simple Logic App with two steps: a Recurrence trigger and an Execute Stored Procedure step. It sounded easy enough.

However, on that particular day, I discovered the timeout limit of the SQL connector used by Logic Apps: only two minutes. If the execution time of the stored procedure goes above 120 seconds, the result is a 504 Timeout response. My stored procedure was running well past that limit, taking on average four hours to complete. Reading up on potential workarounds, the documentation suggested the following:

  • Just use database triggers – I’m generally in the camp of advising against triggers, even for data integrity purposes. In my experience, eventually everyone forgets about them, and re-discovering them is usually a long and painful path, paved with headaches. Besides, triggers weren’t going to solve my problem of running maintenance queries at a set time each day – I couldn’t tie the execution to database events such as inserts, updates or deletes (DML triggers), nor to create/alter/drop statements (DDL triggers), nor to session-start events (LOGON triggers).
  • Native SQL pass-through queries
  • State tables
  • Server-side jobs

After noting the workarounds, the documentation goes into detail about the purpose of Elastic Jobs – they provide a way to asynchronously execute a stored procedure via a job agent. Jobs are periodically executed queries or maintenance tasks that can run over a collection of databases. Inputs and outputs can be stored in a state table, which is a notable feature, though unnecessary for my use case: my task at hand was just to run a simple “exec stored_proc” – no parameters, no need to track outputs (other than watching for failed runs).

In the end, I decided to give the approach a go, even if only as a proof of concept, as the Elastic Job resource is still available only as a Preview feature on Azure.


Setting up the Elastic Job

The first step in creating the Elastic Job is to specify a database in which the resource will generate two schemas, jobs and jobs_internal, as well as a few tables (e.g. jobs, job_steps, job_executions, job_cancellations), views and stored procedures (e.g. sp_add_job, sp_add_jobstep, sp_start_job). To keep things well separated from my project, I decided to create a new database purely for the Elastic Job – as a side note, it had to be configured on at least the S0 tier, so my monthly cost was ~$12. Other than the cost of the database, no additional charges were applied to my account.
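
Before moving on, a quick sanity check can confirm what was generated in the new database – a minimal sketch using only the standard catalog views, nothing Elastic Jobs-specific:

Elastic Job’s server, database

-- List every object the job agent generated in its two schemas
select s.name as [schema], o.name as [object], o.type_desc
from sys.objects o
join sys.schemas s on s.schema_id = o.schema_id
where s.name in ('jobs', 'jobs_internal')
order by s.name, o.type_desc, o.name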


The second step is already the review and creation summary. After making sure everything is correct, the database is populated – however, we are not quite done with the setup: a few statements still have to be executed to finish the configuration.

One of the more confusing parts of the documentation was understanding which statement has to be executed in which database. Below, I will try to simplify this with the following labels, used as headers above each snippet:

  • Original server, database: The ‘target’ SQL Server and the database in it where our original stored procedure is located.
  • Original server, master: The same ‘target’ SQL Server, but connecting to its master database instead of the database holding the stored procedure.
  • Elastic Job’s server, database: The newly created SQL Server and database where we just configured the Elastic Job agent via the Azure Portal.
  • Elastic Job’s server, master: The same newly created SQL Server, but connecting to its master database instead of the job agent’s database.

We will need to create credentials, a login and a user for the job execution. Connecting to the newly created database for the Elastic Job, the following statements have to be executed, in this order:

Elastic Job’s server, database

create master key encryption by password = 'StrongPassword123!'
go

create database scoped credential JobExecution with identity = 'UserForJob',
secret = 'StrongPassword123!'
go
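
To verify the credential landed where expected, sys.database_scoped_credentials can be queried – again a standard catalog view, nothing specific to Elastic Jobs:

Elastic Job’s server, database

-- Confirm the scoped credential exists (the secret is never readable)
select name, credential_identity, create_date
from sys.database_scoped_credentials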

Note that the same set of credentials must be used in the target database, with a create-login and a create-user-from-login, ensuring that we can actually connect to our original database where the stored procedure will be executed. Within the master database of the original server, the following command has to be executed:


Original server, master

create login UserForJobLogin with password = 'StrongPassword123!'

We will create a user from this login both for master and for the actual database:

Original server, master

create user UserForJob for login UserForJobLogin with default_schema = dbo

Original server, database

create user UserForJob for login UserForJobLogin with default_schema = dbo
go

exec sp_addrolemember 'db_owner', 'UserForJob'
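
To double-check the result on the original database, the following query – a minimal sketch using standard catalog views – lists the user together with its role memberships:

Original server, database

-- Show the new user and every database role it belongs to
select dp.name as [user], r.name as [role]
from sys.database_principals dp
left join sys.database_role_members drm on drm.member_principal_id = dp.principal_id
left join sys.database_principals r on r.principal_id = drm.role_principal_id
where dp.name = 'UserForJob'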

Switching back to the Elastic Job’s server and connecting to the job agent’s database (not master), our next step is to create a target group, which will hold the databases where our original stored procedure has to be executed.

A group can be created by simply running the following stored procedure:

Elastic Job’s server, database

exec jobs.sp_add_target_group 'MyTargetGroup'

Now, we need to add members to this group – in our case it will be one member, the SQL server, where we need to execute our original stored procedure in one of its databases:

Elastic Job’s server, database

exec jobs.sp_add_target_group_member
'MyTargetGroup',
@target_type = N'SqlServer',
@refresh_credential_name = 'JobExecution',
@server_name ='tcp:my-original-database.windows.net,1433'
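
As a side note, based on my reading of the documentation, the same procedure can also target a single database instead of a whole server – a minimal sketch, with MyDatabase as a hypothetical database name (no refresh credential is needed here, since there is no server to enumerate):

Elastic Job’s server, database

-- Hypothetical example: include only one specific database in the group
exec jobs.sp_add_target_group_member
'MyTargetGroup',
@target_type = N'SqlDatabase',
@server_name = 'tcp:my-original-database.windows.net,1433',
@database_name = N'MyDatabase'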

If we indeed succeeded, querying the jobs.target_groups and jobs.target_group_members tables should show the following data (of course, the GUIDs will differ):

jobs.target_groups:

target_group_name    target_group_id
MyTargetGroup        4a22f455-b809-45c6-9dcc-262e90efc58a

jobs.target_group_members (a single row, shown transposed):

target_group_name          MyTargetGroup
target_group_id            4a22f455-b809-45c6-9dcc-262e90efc58a
membership_type            Include
target_type                SqlServer
target_id                  e6487db2-c279-4b13-9842-25e3d85fbac7
refresh_credential_name    JobExecution
subscription_id            NULL
resource_group_name        NULL
server_name                tcp:my-original-database.windows.net,1433
database_name              NULL
elastic_pool_name          NULL
shard_map_name             NULL

Now that we have a group to place our job into, we can go ahead and create a job, along with a simple step (which creates a table in the target database if it doesn’t exist, then inserts a single row into it):

Elastic Job’s server, database

exec jobs.sp_add_job
@job_name = 'MyFirstJob',
@description = 'For demo purposes'
go

exec jobs.sp_add_jobstep
@job_name = 'MyFirstJob',
@credential_name = 'JobExecution',
@target_group_name = 'MyTargetGroup',
@command = 'IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = ''ElasticJob'')
BEGIN
CREATE TABLE ElasticJob
(
    [Id] INT IDENTITY,
    [Timestamp] DateTime
)
END
INSERT INTO ElasticJob ([Timestamp]) values (GETUTCDATE())'
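
For my actual use case, the step’s command was nothing more than the stored procedure call itself – a minimal sketch, where MyMaintenanceJob and dbo.MyMaintenanceProc are hypothetical stand-ins for my real names:

Elastic Job’s server, database

-- Hypothetical job whose single step just runs the maintenance procedure
exec jobs.sp_add_job
@job_name = 'MyMaintenanceJob',
@description = 'Nightly maintenance'
go

exec jobs.sp_add_jobstep
@job_name = 'MyMaintenanceJob',
@credential_name = 'JobExecution',
@target_group_name = 'MyTargetGroup',
@command = 'exec dbo.MyMaintenanceProc'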

Again, to make sure everything works as expected, we can query from jobs.jobs and jobs.jobsteps. If both tables are filled, we can try to trigger the job manually via:

Elastic Job’s server, database

exec jobs.sp_start_job 'MyFirstJob'
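
As per the documentation, sp_start_job can also return the execution’s identifier through an output parameter, which makes it easier to look up that specific run later:

Elastic Job’s server, database

-- Capture the execution id of this particular run
declare @job_execution_id uniqueidentifier
exec jobs.sp_start_job 'MyFirstJob', @job_execution_id output
select @job_execution_id as job_execution_id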

If we don’t see the new table or the inserted row in the target database, we can debug the issue by running the following query (paying very close attention to the last_message column):


Elastic Job’s server, database

select * from jobs.job_executions order by create_time desc
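
A variant I found handy while debugging – assuming the lifecycle values surfaced by the view (e.g. 'Failed') – narrows the list down to unsuccessful runs:

Elastic Job’s server, database

-- Show only failed executions, newest first
select job_name, lifecycle, start_time, end_time, last_message
from jobs.job_executions
where lifecycle = 'Failed'
order by create_time desc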

With that, we have created the Elastic Job and we are ready to configure our Logic App, which will trigger the job on a schedule.

Create a Logic App

Provisioning a Logic App is a fairly straightforward process and does not require anything specific to Elastic Jobs. After the resource is created, we can set up the trigger – again, nothing unique in this step; in my case, I configured a Recurrence trigger on a nightly schedule.

The next step, however, is where the fun begins: we’ll start with an Execute SQL Query V2 action, which executes the previously noted jobs.sp_start_job stored procedure:

Elastic Job’s server, database

exec jobs.sp_start_job 'MyFirstJob'

If we wanted to capture the output of the previous step, we could create a state table in the original server’s database, then append a For Each loop with Insert row V2 steps to the Logic App to store the execution details:

[Screenshot: the Logic App with the Recurrence trigger, the Execute SQL Query V2 action, and a For Each loop inserting the execution details into the state table]
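
A minimal sketch of such a state table – the name and columns are hypothetical, loosely mirroring the fields returned by jobs.job_executions:

Original server, database

-- Hypothetical state table for storing job execution details
create table dbo.JobExecutionLog
(
    [Id] int identity primary key,
    [JobName] nvarchar(128) not null,
    [Lifecycle] nvarchar(50) null,
    [StartTime] datetime2 null,
    [EndTime] datetime2 null,
    [LastMessage] nvarchar(max) null
)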

Summary

Of course, this was a very simple scenario, only touching the tip of the iceberg of Elastic Jobs’ capabilities, but for my use case it was a straightforward, elegant solution that took half a day to set up.

Naturally, when implementing the solution in a real-life scenario, some additional aspects have to be considered:

  • In the demo, the user we created on the SQL Server is a db_owner on both master and our own database, which grants far more authorization than necessary.
  • We did not configure any alerts – should the Logic App fail its execution some day, it is a good idea to set up an email or text message alert. For this, a simple Alert Rule watching for the Logic App’s failures is an easy and efficient way to go: the executions of the Elastic Job’s jobs are visible under the resource itself, but no alert can be tied to a “Status” metric (at least at this point in time, March 2022).



Author

  • Daniel Agg

    Full stack developer with 6+ years of experience, building backends and frontends with various technologies, and designing and developing scalable infrastructures on Microsoft Azure.
