Implementing ‘Usage Reports’ with Serverless Architecture

Any SaaS tool is only as valuable as your understanding of how your team is leveraging it. 

At Signeasy, we are always trying to enhance the admin experience by providing admins with data and insights about how their team is using our eSignature solution. This drive to constantly improve the product experience led to our latest release, “Usage Reports”: a feature built to help teams understand their usage patterns and how effective Signeasy is for them.

You know what they say: the best way to build a product is to listen closely to your customers. Before this release, various customers requested reports, which our support team prepared and shared with them as needed. Thus began our journey to develop the new Reports module as part of our core product offering. The goal was to help our customers (team admins) access usage metrics at a transactional level.

The challenges that come with scale

Once we had the goal laid down, we started with brainstorming sessions. When it comes to architecting, building, and delivering the functionality to our users, the conversations behind the scenes are always thrilling.

Here was our biggest roadblock: as traffic volumes and the number of transactions across our SaaS platform grow, so do the challenges of scaling the managed services we host on AWS.

During the architectural discussions for Usage Reports, we realized that as more and more customers start accessing the reports, the load on our backend servers (the compute and data store stacks) will also increase, and scaling them could become difficult.

Another critical factor that needed to be considered was how fast we could build and deliver this with minimal DevOps effort. 

Getting to the right solution 

With the challenges clear in our heads, we began to look for the right solution to overcome them. We explored several out-of-the-box options from Amazon that promised little to no operational management.

The choice was obvious: AWS offers serverless services for all three layers of our technology stack (compute, integration, and data store), on which we could build the entire serverless architecture for usage reports.

Compute: For the compute layer, we chose AWS Lambda, an event-driven serverless compute service that lets us run code without provisioning or managing servers. Lambda hosts our business logic for processing usage metrics and fetching them from the data store.
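To make this concrete, here is a minimal sketch of what such a Lambda handler could look like. The event shape (an SQS batch whose messages carry `team_id`, `date`, and `event` fields) is an illustrative assumption, not Signeasy's actual schema:

```python
import json


def _aggregate(transactions):
    """Roll up raw signature events into per-team, per-day counts.

    The field names here (team_id, date) are hypothetical and only
    illustrate the aggregation step.
    """
    counts = {}
    for tx in transactions:
        key = (tx["team_id"], tx["date"])
        counts[key] = counts.get(key, 0) + 1
    return counts


def handler(event, context):
    """Lambda entry point: parse an SQS batch and aggregate usage metrics.

    In a real pipeline the aggregates would be written to the data store;
    this sketch simply returns them for inspection.
    """
    transactions = [json.loads(record["body"]) for record in event.get("Records", [])]
    counts = _aggregate(transactions)
    return {f"{team}:{day}": n for (team, day), n in counts.items()}
```

Because the handler is a plain function, it can be unit-tested locally with a hand-built event before it is ever deployed.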

Application Integration: Under this serverless category, we used Amazon API Gateway and Amazon SQS. API Gateway acts as the front door for our backend service (Lambda), while SQS is the messaging queue that decouples and scales our existing microservices, including the new serverless stack. Signature transaction data is pushed asynchronously through SQS to a scalable analytics store.
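A producer-side sketch of that asynchronous hand-off might look like the following. The queue URL and message fields are placeholders for illustration; the `boto3` import is deferred so the message builder can be tested without AWS credentials:

```python
import json


def build_usage_message(team_id, document_id, event_type, timestamp):
    """Serialize one signature transaction as an SQS message body.

    All field names are illustrative, not Signeasy's actual schema.
    """
    return json.dumps({
        "team_id": team_id,
        "document_id": document_id,
        "event": event_type,
        "ts": timestamp,
    })


def enqueue_usage_event(queue_url, team_id, document_id, event_type, timestamp):
    """Fire-and-forget publish to the usage queue, keeping the analytics
    write off the synchronous request path."""
    import boto3  # deferred so the helper above stays testable offline
    sqs = boto3.client("sqs")
    return sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=build_usage_message(team_id, document_id, event_type, timestamp),
    )
```

The point of the queue is exactly this decoupling: the signing flow only pays the cost of one `send_message` call, and the Lambda consumer aggregates at its own pace.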

Data Store: This is the core of our serverless offering for usage reports. We adopted Amazon Timestream, a fast, scalable, serverless time series database, for our analytical data needs. With Timestream, we can store and analyze millions of events and transactions per day, much faster and at as little as one-tenth the cost of a relational database. Here we store aggregated data about customer signature transactions.
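As a sketch of how aggregated counts could land in Timestream, the snippet below shapes one record and writes a batch via the `timestream-write` API. The dimension and measure names are hypothetical; again, `boto3` is imported lazily so the record builder is testable without AWS:

```python
def to_timestream_record(team_id, event_type, count, ts_millis):
    """Shape one aggregated count as a Timestream record.

    The schema (dimensions team_id/event, measure transaction_count)
    is illustrative, not Signeasy's actual table design.
    """
    return {
        "Dimensions": [
            {"Name": "team_id", "Value": str(team_id)},
            {"Name": "event", "Value": event_type},
        ],
        "MeasureName": "transaction_count",
        "MeasureValue": str(count),
        "MeasureValueType": "BIGINT",
        "Time": str(ts_millis),
        "TimeUnit": "MILLISECONDS",
    }


def write_aggregates(database, table, records):
    """Batch-write prepared records to a Timestream table."""
    import boto3  # lazy import keeps to_timestream_record testable offline
    client = boto3.client("timestream-write")
    return client.write_records(
        DatabaseName=database, TableName=table, Records=records
    )
```

Modeling team and event type as dimensions keeps per-team, per-period report queries cheap, since Timestream indexes and partitions data along time and dimensions.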

With this fully serverless stack of compute, application integration, and data store, we could significantly reduce both developer involvement and DevOps effort.

The technology helped us eliminate infrastructure management tasks like capacity provisioning and patching. Resource utilization is automatically optimized, and we don’t have to pay for over-provisioning: we pay only for the compute time we actually use, billed per millisecond.

The entire stack is auto-scalable to meet peak demands and can scale down automatically when traffic reduces. For instance, AWS Lambda automatically responds to requests at any scale, from a few events per day to hundreds of thousands per second.

Overall, the performance and results so far have been phenomenal with the stack! If you wish to know more about this implementation or have feedback on how we can improve our stack, please write to us. We would love to hear from you!
