# TraxLambdaFunction

Abstract base class for AWS Lambda functions that execute Trax trains via direct SDK invocation. It handles the service provider lifecycle, envelope-based dispatching, cancellation, and error handling, so your Lambda function reduces to a DI configuration.
## Package

```shell
dotnet add package Trax.Runner.Lambda
```
## Signature

```csharp
public abstract class TraxLambdaFunction
{
    protected abstract void ConfigureServices(IServiceCollection services, IConfiguration configuration);

    protected virtual void ConfigureLogging(ILoggingBuilder logging);

    public Task<object?> FunctionHandler(LambdaEnvelope envelope, ILambdaContext context);

    public Task RunLocalAsync(string[] args);
}
```
## Overridable Members

| Member | Required | Description |
|---|---|---|
| `ConfigureServices(IServiceCollection, IConfiguration)` | Yes | Register your Trax effects, mediator, data contexts, and application services. `IConfiguration` is loaded from `appsettings.json` (if present) and environment variables. Do not call `AddTraxJobRunner()`; the base class does this automatically. |
| `ConfigureLogging(ILoggingBuilder)` | No | Customize logging. Default: console logging at `Information` level. |
## Envelope Dispatching

The `FunctionHandler` entry point receives a `LambdaEnvelope` directly from the AWS SDK; no API Gateway or Function URL is involved. The envelope's `Type` field determines the operation:

| Type | Handler | Description |
|---|---|---|
| `Execute` | `ITraxRequestHandler.ExecuteJobAsync` | Fire-and-forget job execution (queue path). Returns `RemoteJobResponse`. |
| `Run` | `ITraxRequestHandler.RunTrainAsync` | Synchronous execution with output (run path). Returns `RemoteRunResponse`. |
| unknown | (none) | Throws `InvalidOperationException`. |

The `LambdaEnvelope` is a shared contract defined in Trax.Scheduler:

```csharp
public record LambdaEnvelope(LambdaRequestType Type, string PayloadJson);

public enum LambdaRequestType { Execute, Run }
```
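To illustrate the contract, a caller constructs an envelope by serializing its request payload and choosing a request type. The `MyTrainRequest` type below is a hypothetical payload for illustration only; in practice the scheduler-side components build envelopes for you:

```csharp
using System.Text.Json;

// Hypothetical request payload, for illustration only.
public record MyTrainRequest(string TrainName, int BatchSize);

// Serialize the payload and wrap it in the shared envelope contract.
var payload = JsonSerializer.Serialize(new MyTrainRequest("nightly-sync", 100));
var envelope = new LambdaEnvelope(LambdaRequestType.Execute, payload);
```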
## Examples

### Minimal Runner

```csharp
using Amazon.Lambda.Core;
using Amazon.Lambda.Serialization.SystemTextJson;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Trax.Runner.Lambda;

[assembly: LambdaSerializer(typeof(DefaultLambdaJsonSerializer))]

public class Function : TraxLambdaFunction
{
    protected override void ConfigureServices(IServiceCollection services, IConfiguration configuration)
    {
        var connString = configuration.GetConnectionString("TraxDatabase")!;
        services.AddTrax(trax => trax
            .AddEffects(effects => effects.UsePostgres(connString))
            .AddMediator(typeof(MyTrain).Assembly));
    }
}
```
### With Custom Logging and Effects

```csharp
[assembly: LambdaSerializer(typeof(DefaultLambdaJsonSerializer))]

public class Function : TraxLambdaFunction
{
    protected override void ConfigureServices(IServiceCollection services, IConfiguration configuration)
    {
        var connString = configuration.GetConnectionString("TraxDatabase")!;
        var rabbitMq = configuration.GetConnectionString("RabbitMQ")!;

        services.AddMyDataContexts(connString);
        services.AddTrax(trax => trax
            .AddEffects(effects => effects
                .UsePostgres(connString)
                .SaveTrainParameters()
                .AddJunctionProgress()
                .UseBroadcaster(b => b.UseRabbitMq(rabbitMq)))
            .AddMediator(
                typeof(MyClientTrains.AssemblyMarker).Assembly,
                typeof(MyAdminTrains.AssemblyMarker).Assembly));
    }

    protected override void ConfigureLogging(ILoggingBuilder logging)
    {
        logging.AddConsole().SetMinimumLevel(LogLevel.Debug);
    }
}
```
## Local Development

Use `RunLocalAsync` to run the Lambda function as a local Kestrel web server. This maps `POST /trax/execute` and `POST /trax/run` endpoints that wrap incoming HTTP request bodies into `LambdaEnvelope` payloads and execute them through the same handler logic as the Lambda entry point.

```csharp
// Program.cs
await new Function().RunLocalAsync(args);
```

This enables a smooth development workflow:

- Local dev: the scheduler uses `UseRemoteWorkers()` + `UseRemoteRun()` to hit the local Kestrel server.
- Production: the scheduler uses `UseLambdaWorkers()` + `UseLambdaRun()` for direct SDK invocation.

The local server reads its port from `appsettings.json` (via Kestrel configuration) and exposes the same endpoints that the Lambda would handle in production.
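The local endpoints can also be exercised directly with any HTTP client. The port and the payload shape below are assumptions for illustration; your configured Kestrel port and train request schema will differ:

```shell
# Hypothetical request body; the local server wraps it into a LambdaEnvelope
# and dispatches it through the same handler logic as the Lambda entry point.
curl -X POST http://localhost:5000/trax/run \
  -H "Content-Type: application/json" \
  -d '{"trainName":"nightly-sync"}'
```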
## Configuration

The base class automatically builds an `IConfiguration` from:

- `appsettings.json` (optional, loaded from `AppContext.BaseDirectory`)
- Environment variables

This means you can use `appsettings.json` for local development and environment variables in Lambda; both work out of the box. The configuration is passed to `ConfigureServices` and registered in DI as `IConfiguration`.
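The configuration pipeline described above corresponds roughly to this sketch (the base class's actual implementation may differ in details):

```csharp
using Microsoft.Extensions.Configuration;

// appsettings.json is optional, so a missing file is not an error.
// Environment variables are added last, so they win on key conflicts.
IConfiguration configuration = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();
```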
## Cold Start Optimization

The service provider is built lazily on the first invocation, not during Lambda container creation. Subsequent invocations within the same container reuse the same provider.

To minimize cold start time:

- Keep `ConfigureServices` lean. Only register what the runner needs.
- Use `SkipMigrations()`. Migrations should run from the API or CI, not the Lambda.
- Avoid unnecessary effect providers. If the runner doesn't need broadcasting, don't register it.
## How It Works

1. The Lambda runtime creates an instance of your `Function` class.
2. On the first `FunctionHandler` invocation, `BuildServiceProvider()` is called, which:
   - Builds `IConfiguration` from `appsettings.json` + environment variables
   - Creates a `ServiceCollection`
   - Registers `IConfiguration` as a singleton
   - Calls `ConfigureLogging()` (virtual, overridable)
   - Calls `ConfigureServices()` (your code)
   - Calls `AddTraxJobRunner()` (automatic)
   - Builds and caches the `IServiceProvider`
3. Each invocation creates a new DI scope and resolves `ITraxRequestHandler`.
4. Cancellation is derived from `ILambdaContext.RemainingTime`.
5. The `LambdaEnvelope.Type` field determines which handler method is called.
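Steps 3 through 5 can be sketched as follows. This is a simplified outline, not the actual implementation; in particular, the parameter lists of `ExecuteJobAsync` and `RunTrainAsync` are assumptions:

```csharp
public async Task<object?> FunctionHandler(LambdaEnvelope envelope, ILambdaContext context)
{
    // Built lazily on first invocation, then cached for the container's lifetime.
    var provider = _provider ??= BuildServiceProvider();

    // Derive cancellation from the remaining Lambda execution time.
    using var cts = new CancellationTokenSource(context.RemainingTime);

    // Each invocation gets a fresh DI scope.
    using var scope = provider.CreateScope();
    var handler = scope.ServiceProvider.GetRequiredService<ITraxRequestHandler>();

    return envelope.Type switch
    {
        LambdaRequestType.Execute => await handler.ExecuteJobAsync(envelope.PayloadJson, cts.Token),
        LambdaRequestType.Run => await handler.RunTrainAsync(envelope.PayloadJson, cts.Token),
        _ => throw new InvalidOperationException($"Unknown request type: {envelope.Type}"),
    };
}
```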
## Error Handling

For `Execute` requests, exceptions are logged and returned as a `RemoteJobResponse` with structured error fields (`IsError`, `ErrorMessage`, `ExceptionType`, `StackTrace`). Errors that occur within the train itself are also persisted to the Metadata table by `ServiceTrain.Run`; pre-train errors (e.g., deserialization failures) only appear in the log output. The `LambdaJobSubmitter` on the scheduler side does not read the response (fire-and-forget).

For `Run` requests, exceptions are logged before being rethrown. `ITraxRequestHandler.RunTrainAsync` returns a `RemoteRunResponse` that may contain structured error fields. The `LambdaRunExecutor` on the scheduler side reads the response and reconstructs a `TrainException` with the full error context.
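A caller that does read the `Execute` response could inspect the structured error fields like this (a sketch based on the field names documented above; `response` and `logger` are assumed to be in scope):

```csharp
// Fields per the documented RemoteJobResponse contract:
// IsError, ErrorMessage, ExceptionType, StackTrace.
if (response.IsError)
{
    logger.LogError(
        "Job failed with {ExceptionType}: {ErrorMessage}\n{StackTrace}",
        response.ExceptionType, response.ErrorMessage, response.StackTrace);
}
```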
## See Also

- Remote Execution: architecture overview and deployment models
- UseLambdaWorkers: scheduler-side configuration for Lambda dispatch
- UseLambdaRun: scheduler-side configuration for Lambda run execution
- AddTraxJobRunner: what `AddTraxJobRunner()` registers (called automatically by the base class)