
Apr 25, 2019
In Cloud
The basics of adding realtime data push to your serverless backend.

Serverless

Serverless is one of the developer world's most popular misnomers. Contrary to its name, serverless computing does in fact use servers, but the benefit is that you can worry less about maintenance, scale, and configuration. This is because serverless is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine and compute resources. You are essentially deploying code to an environment without visible processes, operating systems, servers, or virtual machines. From a pricing perspective, you are typically charged for the resources you actually consume, not for pre-purchased capacity.

Pros

- Reduced architectural complexity
- Simplified packaging and deployment
- Reduced cost to scale
- Eliminates the need for system admins
- Works well with microservice architectures
- Reduced operational costs
- Typically decreased time to market with faster releases

Cons

- Performance issues: typically higher latency due to how compute resources are allocated
- Vendor lock-in (hard to move to a new provider)
- Not efficient for long-running applications
- Multi-tenancy issues, since service providers may run software for several different customers on the same server
- Difficult to test functions locally
- Different FaaS implementations provide different methods for logging in functions

AWS Lambda

Amazon's take on serverless comes in the form of AWS Lambda. AWS Lambda lets you run code without provisioning or managing servers, and you pay only for your actual usage. With Lambda, you can run code for virtually any type of application or backend service, and Lambda automatically runs and scales your application code. Moreover, you can set up your code to trigger automatically from other AWS services, or call it directly from any web or mobile app.

WebSockets

A WebSocket provides a long-lived connection for exchanging messages between client and server.
Messages may flow in either direction for full-duplex communication. A client creates a WebSocket connection to a server using a WebSocket client library. WebSocket libraries are available in practically every language, and browsers support WebSockets natively through the WebSocket JavaScript object. The connection negotiation uses an HTTP-like exchange, and a successful negotiation is indicated with status code 101. After the negotiation response is sent, the connection remains open and is used to exchange message frames, in either binary or Unicode string format. Peers may also exchange close frames to perform a clean close.

Building AWS IoT WebSockets

Function-as-a-service backends such as AWS Lambda are not designed to handle long-lived connections on their own, because function invocations are meant to be short-lived. Instead, Lambda is designed to integrate with services such as AWS IoT to handle these kinds of connections. AWS IoT Core supports MQTT (natively or over WebSockets), a lightweight communication protocol specifically designed to tolerate intermittent connections.

[Image: AWS IoT Core site]

However, this approach alone will not give you access to the raw protocol elements, and it will not let you build a pure Lambda-powered API (if that is your intended use case). If you want that access, you need to take a different approach.

Building Lambda-Powered WebSockets with Fanout

You can also build custom Lambda-powered WebSockets by integrating a service like Fanout, a cross between a message broker and a reverse proxy that enables realtime data push for apps and APIs. Together, these services let you build a Lambda-powered API that supports plain WebSockets. This approach uses GRIP, the Generic Realtime Intermediary Protocol, which makes it possible for a web service to delegate realtime push behavior to a proxy component.
The FaaS GRIP library makes it easy to delegate long-lived connection management to Fanout, so that backend functions only need to be invoked when there is connection activity. The other benefit is that backend functions do not have to run for the duration of each connection.

The following step-by-step breakdown is meant as a quick configuration reference. You can check out the GitHub libraries for Node and Python integrations.

1. Initial Configuration

First, configure your Fanout Cloud domain/environment, and set up an API and resource in AWS API Gateway that points to your Lambda function, using a Lambda proxy integration.

2. Using WebSockets

Whenever an HTTP request or WebSocket connection is made to your Fanout Cloud domain, your Lambda function will be able to control it. To make this possible, Fanout converts incoming WebSocket connection activity into a series of HTTP requests to your backend.

3. You've Got Realtime

You now have a realtime WebSocket API driven by a Lambda function!

An Example

The following Node.js code implements a WebSocket echo service. I recommend checking out the full FaaS GRIP library for a step-by-step breakdown, and for instructions on implementing HTTP long polling and HTTP streaming.
```javascript
var grip = require('grip');
var faas_grip = require('faas-grip');

exports.handler = function (event, context, callback) {
    var ws;
    try {
        ws = faas_grip.lambdaGetWebSocket(event);
    } catch (err) {
        callback(null, {
            statusCode: 400,
            headers: {'Content-Type': 'text/plain'},
            body: 'Not a WebSocket-over-HTTP request\n'
        });
        return;
    }

    // if this is a new connection, accept it
    if (ws.isOpening()) {
        ws.accept();
    }

    // here we loop over any messages
    while (ws.canRecv()) {
        var message = ws.recv();

        // if return value is null, then the connection is closed
        if (message == null) {
            ws.close();
            break;
        }

        // echo the message
        ws.send(message);
    }

    callback(null, ws.toResponse());
};
```

Overall, if you're not looking for full control over your raw protocol elements, then you may find it easier to try a Lambda/AWS IoT configuration. If you need more WebSocket visibility and control, then the Lambda + Fanout integration is probably your best bet.
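To make step 2 above more concrete: a GRIP proxy like Fanout delivers connection activity to the backend as a series of "events" framed in the HTTP request body. The following is a rough, illustrative sketch of that framing, with the event names and the hex content-length convention assumed from the GRIP/Pushpin WebSocket-over-HTTP format; treat it as a sketch, not a reference implementation:

```javascript
// Sketch: serialize WebSocket-over-HTTP events the way a GRIP proxy
// frames them in an HTTP request body. Assumed wire format:
//   "TYPE\r\n"                               for events with no body
//   "TYPE <content-length-in-hex>\r\n<content>\r\n"  for events with a body
function encodeEvent(type, content) {
  if (content === undefined || content === null) {
    // bodyless events, e.g. a connection opening
    return type + '\r\n';
  }
  var buf = Buffer.from(content, 'utf8');
  return type + ' ' + buf.length.toString(16) + '\r\n' +
      buf.toString('utf8') + '\r\n';
}

// A client opening a connection and sending "hello world" would reach
// the backend as a request body along these lines:
var body = encodeEvent('OPEN') + encodeEvent('TEXT', 'hello world');
console.log(JSON.stringify(body));
```

This is the translation that lets a short-lived Lambda invocation "see" a long-lived connection: each burst of socket activity becomes one ordinary HTTP request, and the function's response carries events back the other way.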