This Serverless plugin emulates AWS λ and API Gateway on your local machine to speed up your development cycles. To do so, it starts an HTTP server that handles the request's lifecycle like APIG does and invokes your handlers.
Features:
- Node.js, Python and Ruby λ runtimes.
- Velocity templates support.
- Lazy loading of your files with require cache invalidation: no need for a reloading tool like Nodemon.
- And more: integrations, authorizers, proxies, timeouts, responseParameters, HTTPS, CORS, etc...
This plugin is updated by its users; I just do maintenance and ensure that PRs are relevant to the community. In other words, if you find a bug or want a new feature, please help us by becoming one of the contributors ✌️ ! See the contributing section.
- Installation
- Usage and command line options
- Token authorizers
- Custom authorizers
- Remote authorizers
- Custom headers
- Environment variables
- AWS API Gateway features
- Usage with Webpack
- Velocity nuances
- Debug process
- Scoped execution
- Simulation quality
- Credits and inspiration
- Contributing
- License
First, add Serverless Offline to your project:
npm install serverless-offline --save-dev
Then, inside your project's `serverless.yml` file, add the following entry to the `plugins` section: `serverless-offline`. If there is no `plugins` section, you will need to add it to the file.
It should look something like this:
```yaml
plugins:
  - serverless-offline
```
You can check whether you have successfully installed the plugin by running the Serverless command line:
serverless
The console should now display Offline as one of the plugins available in your Serverless project.
In your project root run:
serverless offline
or `sls offline`.
To list all the options for the plugin, run:
sls offline --help
All CLI options are optional:
- `--prefix` `-p` Adds a prefix to every path, to send your requests to `http://localhost:3000/[prefix]/[your_path]` instead. E.g. `-p dev`
- `--location` `-l` The root location of the handlers' files. Defaults to the current directory.
- `--host` `-o` Host name to listen on. Default: `localhost`
- `--port` `-P` Port to listen on. Default: `3000`
- `--stage` `-s` The stage used to populate your templates. Default: the first stage found in your project.
- `--region` `-r` The region used to populate your templates. Default: the first region for the first stage found.
- `--noTimeout` `-t` Disables the timeout feature.
- `--binPath` `-b` Path to the Serverless binary. Default: the globally installed `sls`.
- `--noEnvironment` Turns off loading of your environment variables from `serverless.yml`. Allows the usage of tools such as PM2 or docker-compose.
- `--resourceRoutes` Turns on loading of your HTTP proxy settings from `serverless.yml`.
- `--printOutput` Turns on logging of your lambda outputs in the terminal.
- `--httpsProtocol` `-H` To enable HTTPS, specify a directory (relative to your cwd, typically your project dir) containing both `cert.pem` and `key.pem` files.
- `--skipCacheInvalidation` `-c` Tells the plugin to skip require cache invalidation. A script reloading tool like Nodemon might then be needed.
- `--cacheInvalidationRegex` Provides the plugin with a regexp to use for ignoring cache invalidation. Default: `'node_modules'`
- `--useSeparateProcesses` Runs handlers in separate Node processes.
- `--corsAllowOrigin` Used as the default `Access-Control-Allow-Origin` header value for responses. Delimit multiple values with commas. Default: `'*'`
- `--corsAllowHeaders` Used as the default `Access-Control-Allow-Headers` header value for responses. Delimit multiple values with commas. Default: `'accept,content-type,x-api-key'`
- `--corsExposedHeaders` Used as an additional `Access-Control-Exposed-Headers` header value for responses. Delimit multiple values with commas. Default: `'WWW-Authenticate,Server-Authorization'`
- `--corsDisallowCredentials` When provided, the default `Access-Control-Allow-Credentials` header value will be passed as `'false'`. Default: `true`
- `--exec "<script>"` When provided, a shell script is executed when the server starts up, and the server will shut down after handling this command.
- `--apiKey` Defines the API key value to be used for endpoints marked as private. Defaults to a random hash.
- `--noAuth` Turns off all authorizers.
- `--preserveTrailingSlash` Used to keep trailing slashes on the request path.
- `--disableCookieValidation` Used to disable cookie validation on the hapi.js server.
- `--enforceSecureCookies` Enforces secure cookies.
- `--providedRuntime` Sets the runtime for "provided" lambda runtimes.
- `--disableModelValidation` Disables the model validation.
- `--showDuration` Shows the execution duration of the lambda function.
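For example, to combine a few of these options (the `dev-certs` directory and the stage name are just examples):

```bash
# Custom port and stage, HTTPS certs read from ./dev-certs, and timeouts disabled.
sls offline --port 4000 --stage dev --httpsProtocol dev-certs --noTimeout
```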
Any of the CLI options can be added to your `serverless.yml`. For example:
```yaml
custom:
  serverless-offline:
    httpsProtocol: "dev-certs"
    port: 4000
```
Options passed on the command line override YAML options.
By default you can send your requests to `http://localhost:3000/`. Please note that:
- You'll need to restart the plugin if you modify your `serverless.yml` or any of the default velocity template files.
- The event object passed to your λs has one extra key: `{ isOffline: true }`. Also, `process.env.IS_OFFLINE` is `true` (see the handler sketch after this list).
- When no Content-Type header is set on a request, API Gateway defaults to `application/json`, and so does the plugin. But if you send an `application/x-www-form-urlencoded` or a `multipart/form-data` body with an `application/json` (or no) Content-Type, API Gateway won't parse your data (you'll get the ugly raw data as input), whereas the plugin will answer 400 (malformed JSON). Please consider explicitly setting your requests' Content-Type and using separate templates.
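As a minimal sketch (the handler name and file are hypothetical), a Node.js handler could branch on these flags, for example to stub out AWS calls while developing locally:

```javascript
// handler.js — hypothetical handler illustrating the offline flags described above.
'use strict';

module.exports.hello = async (event) => {
  // event.isOffline is injected by serverless-offline; IS_OFFLINE is set in the environment.
  // Neither is present when running on real AWS Lambda.
  const runningOffline = Boolean(event.isOffline || process.env.IS_OFFLINE);

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: runningOffline ? 'Hello from serverless-offline' : 'Hello from AWS',
    }),
  };
};
```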
As defined in the Serverless Documentation you can use API Keys as a simple authentication method.
Serverless-offline will emulate the behaviour of APIG and create a random token that's printed on the screen. With this token you can access your private methods by adding `x-api-key: generatedToken` to your request headers. All API keys will share the same token. To specify a custom token, use the `--apiKey` CLI option.
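For instance, with a private endpoint such as the hypothetical one below, requests must carry the printed (or `--apiKey`-provided) token in the `x-api-key` header:

```yaml
# Hypothetical serverless.yml excerpt: marking an endpoint as private
# so that serverless-offline requires an x-api-key header.
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
          private: true
```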
Only custom authorizers are supported. Custom authorizers are executed before a Lambda function is executed and return an Error or a Policy document.
The custom authorizer is passed an `event` object as below:
```json
{
  "type": "TOKEN",
  "authorizationToken": "<Incoming bearer token>",
  "methodArn": "arn:aws:execute-api:<Region id>:<Account id>:<API id>/<Stage>/<Method>/<Resource path>"
}
```
The `methodArn` does not include the Account id or API id.
The plugin only supports retrieving Tokens from headers. You can configure the header as below:
"authorizer": {
"type": "TOKEN",
"identitySource": "method.request.header.Authorization", // or method.request.header.SomeOtherHeader
"authorizerResultTtlInSeconds": "0"
}
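A minimal token authorizer sketch (function and file names, as well as the token value, are hypothetical) that consumes this event and returns a policy document might look like this:

```javascript
// authorizer.js — hypothetical custom authorizer: allows the request when the
// bearer token matches, otherwise fails with 'Unauthorized' (mapped to a 401).
'use strict';

module.exports.handler = async (event) => {
  if (event.authorizationToken !== 'Bearer my-secret-token') {
    throw new Error('Unauthorized');
  }

  return {
    principalId: 'user-123',
    policyDocument: {
      Version: '2012-10-17',
      Statement: [
        {
          Action: 'execute-api:Invoke',
          Effect: 'Allow',
          Resource: event.methodArn,
        },
      ],
    },
  };
};
```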
You are able to mock the response from remote authorizers by setting the environment variable `AUTHORIZER` before running `sls offline start`.
Example:
Unix:
export AUTHORIZER='{"principalId": "123"}'
Windows:
SET AUTHORIZER='{"principalId": "123"}'
You are able to use some custom headers in your request to gain more control over the requestContext object.
| Header | Event key |
|---|---|
| cognito-identity-id | event.requestContext.identity.cognitoIdentityId |
| cognito-authentication-provider | event.requestContext.identity.cognitoAuthenticationProvider |
By doing this you are now able to change those values using a custom header. This can help you with easier authentication or retrieving the userId from a `cognitoAuthenticationProvider` value.
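For example (the endpoint path and header values are hypothetical), you could override both identity fields on a local request like this:

```bash
# Hypothetical request overriding the Cognito identity fields described above.
curl -H "cognito-identity-id: eu-west-1:abc-123" \
     -H "cognito-authentication-provider: cognito-idp.eu-west-1.amazonaws.com/eu-west-1_example" \
     http://localhost:3000/hello
```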
You are able to use environment variables to customize identity params in the event context.
| Environment Variable | Event key |
|---|---|
| SLS_COGNITO_IDENTITY_POOL_ID | event.requestContext.identity.cognitoIdentityPoolId |
| SLS_ACCOUNT_ID | event.requestContext.identity.accountId |
| SLS_COGNITO_IDENTITY_ID | event.requestContext.identity.cognitoIdentityId |
| SLS_CALLER | event.requestContext.identity.caller |
| SLS_API_KEY | event.requestContext.identity.apiKey |
| SLS_COGNITO_AUTHENTICATION_TYPE | event.requestContext.identity.cognitoAuthenticationType |
| SLS_COGNITO_AUTHENTICATION_PROVIDER | event.requestContext.identity.cognitoAuthenticationProvider |
You can use serverless-dotenv-plugin to load environment variables from your `.env` file.
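A minimal `.env` sketch (all values are placeholders) feeding some of the variables above could look like:

```bash
# .env — hypothetical values for the identity-related variables listed above.
SLS_ACCOUNT_ID=123456789012
SLS_API_KEY=local-dev-key
SLS_COGNITO_IDENTITY_ID=eu-west-1:abc-123
```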
You can supply response and request templates for each function. This is optional. To do so, place function-specific template files in the same directory as your function file and add the `.req.vm` extension to request template filenames and `.res.vm` to response template filenames.
For example, if your function is in code-file `helloworld.js`, your response template should be in file `helloworld.res.vm` and your request template in file `helloworld.req.vm`.
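Assuming that layout, the function's directory would then contain:

```
.
├── helloworld.js
├── helloworld.req.vm
└── helloworld.res.vm
```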
If the endpoint config has CORS set to true, the plugin will use the CLI CORS options for the associated route. Otherwise, no CORS headers will be added.
Set greedy paths like `/store/{proxy+}` that will intercept requests made to `/store/list-products`, `/store/add-product`, etc...
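A sketch of such an endpoint in `serverless.yml` (the function and handler names are hypothetical):

```yaml
# Hypothetical catch-all route: /store/{proxy+} matches any sub-path under /store.
functions:
  store:
    handler: handler.store
    events:
      - http:
          path: store/{proxy+}
          method: any
```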
The ANY method works out of the box.
Lambda and Lambda Proxy integrations work out of the box. See examples in the manual_test directory.
Serverless doc ~ AWS doc - AWS::ApiGateway::Method ~ AWS doc - AWS::ApiGateway::Resource
Example of enabling proxy:
```yaml
custom:
  serverless-offline:
    resourceRoutes: true
```
or
```yaml
YourCloudFormationMethodId:
  Type: AWS::ApiGateway::Method
  Properties:
    ......
    Integration:
      Type: HTTP_PROXY
      Uri: 'https://s3-${self:custom.region}.amazonaws.com/${self:custom.yourBucketName}/{proxy}'
    ......
```
```yaml
custom:
  serverless-offline:
    resourceRoutes:
      YourCloudFormationMethodId:
        Uri: 'http://localhost:3001/assets/{proxy}'
```
You can set your response's headers using ResponseParameters.
May not work properly. Please PR. (Difficulty: hard?)
Example response velocity template:
"responseParameters": {
"method.response.header.X-Powered-By": "Serverless", // a string
"method.response.header.Warning": "integration.response.body", // the whole response
"method.response.header.Location": "integration.response.body.some.key" // a pseudo JSON-path
},
You can enable request body validation against a request model for lambda-proxy or lambda integration types. Instructions are:
- Define a validator resource with `ValidateRequestBody` set to true
- Link the validator to an http event via `reqValidatorName`
- Define a model
- Link the model to the http event via `documentation.requestModels`

In case of an invalid request body, the server will respond with 400.
Example serverless.yml:
```yaml
custom:
  documentation:
    models:
      -
        name: HelloModel
        contentType: application/json
        schema:
          type: object
          properties:
            message:
              type: string
              minLength: 2
          required:
            - message

functions:
  helloWorld:
    handler: handler.helloWorld
    events:
      - http:
          path: hello-world
          method: post
          cors: true
          reqValidatorName: myValidator
          documentation:
            requestModels:
              "application/json": HelloModel

resources:
  Resources:
    myValidator:
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: 'my-validator'
        RestApiId:
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: false
```
To disable the model validation you can use `--disableModelValidation`.
Use serverless-webpack to compile and bundle your ES-next code.
Consider this requestTemplate for a POST endpoint:
"application/json": {
"payload": "$input.json('$')",
"id_json": "$input.json('$.id')",
"id_path": "$input.path('$').id"
}
Now let's make a request with this body: `{ "id": 1 }`
AWS parses the event as such:
```javascript
{
  "payload": {
    "id": 1
  },
  "id_json": 1,
  "id_path": "1" // Notice the string
}
```
Whereas Offline parses:
```javascript
{
  "payload": {
    "id": 1
  },
  "id_json": 1,
  "id_path": 1, // Notice the number
  "isOffline": true
}
```
Accessing an attribute after using `$input.path` will return a string on AWS (expect strings like `"1"` or `"true"`) but not with Offline (`1` or `true`).
You may find other differences.
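If your handler depends on those values, a small normalization step (a sketch below, reusing the `id_path` key from the template above; the handler name is hypothetical) keeps behaviour consistent in both environments:

```javascript
// Hypothetical guard: coerce id_path to a string so the handler behaves
// the same on AWS (where it arrives as "1") and offline (where it arrives as 1).
module.exports.process = async (event) => {
  const id = String(event.id_path);

  return { statusCode: 200, body: JSON.stringify({ id }) };
};
```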
The Serverless Offline plugin will respond to the overall framework settings and output additional information to the console in debug mode. In order to do this you will have to set the `SLS_DEBUG` environment variable. You can run the following in the command line to switch to debug mode execution.
Unix:
export SLS_DEBUG=*
Windows:
SET SLS_DEBUG=*
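For example, on Unix you can enable debug output for a single run (the stage name is just an example):

```bash
SLS_DEBUG=* sls offline --stage dev
```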
Interactive debugging is also possible for your project if you have installed the node-inspector module and the Chrome browser. You can then run the following command line inside your project's root.
Initial installation:
npm install -g node-inspector
For each debug run:
node-debug sls offline
The system will start in wait status. This will also automatically start the Chrome browser and wait for you to set breakpoints for inspection. Set the breakpoints as needed and then click the play button for the debugging to continue.
Depending on the breakpoint, you may need to call the URL path for your function in a separate browser window for your serverless function to be run and made available for debugging.
Lambda functions assume an IAM role during execution: the framework creates this role and sets all the permissions provided in the `iamRoleStatements` section of `serverless.yml`.
However, serverless offline makes use of your local AWS profile credentials to run the lambda functions, and that might result in a different set of permissions. By default, the aws-sdk will load credentials for your default AWS profile specified in your configuration file.
You can change this profile directly in the code or by setting proper environment variables. Setting the `AWS_PROFILE` environment variable to a different profile before calling `serverless offline` would effectively change the credentials, e.g.
AWS_PROFILE=<profile> serverless offline
The Serverless Offline plugin can invoke shell scripts when the simulated server has been started up, for the purposes of integration testing. Downstream plugins may tie into the `before:offline:start:end` hook to release resources when the server is shutting down.
> sls offline start --exec "./startIntegrationTests.sh"
This plugin simulates API Gateway for many practical purposes, good enough for development - but is not a perfect simulator. Specifically, Lambda currently runs on Node v6.10.0 and v8.10.0 (AWS Docs), whereas Offline runs on your own runtime where no memory limits are enforced.
Run `serverless offline start`. In comparison with `serverless offline`, the `start` command will fire an `init` and an `end` lifecycle hook, which is needed for serverless-offline and serverless-dynamodb-local to switch off resources.
Add plugins to your `serverless.yml` file:
```yaml
plugins:
  - serverless-webpack
  - serverless-dynamodb-local
  - serverless-offline # serverless-offline needs to be last in the list
```
This plugin was initially a fork of Nopik's Serverless-serve.
Yes, thank you! This plugin is community-driven; most of its features come from different authors. Please update the docs and tests and add your name to the package.json file. We try to follow Airbnb's JavaScript Style Guide.
MIT