The versatility of gRPC, an open source high-performance RPC framework
During the birth of the computer age, the first computers were the size of a room and all computations had to be run on the same computer. Once the first network was switched on, creating protocols for those computers to communicate became necessary. A protocol is defined as the set of rules governing the exchange or transmission of data between devices. With the evolution of technology, more protocols have been created that enable multiple computers to communicate in various formats.
The HTTP protocol was ratified by the IETF in 1996 and rapidly became the standard to use for internet-enabled applications. HTTP is built upon the TCP protocol (except for HTTP/3, currently in development) and provides a standard used by most web servers available today, whether packaged or custom-built. HTTP has been revised over the years to add features such as compression, caching, and authentication.
gRPC is an emerging open source RPC framework and, in many ways, a successor to plain HTTP APIs: it is designed to focus on the contract between applications and to let the underlying protocols handle traffic routing. gRPC improves upon those underlying protocols by:
- Using HTTP/2, enabling more features such as compression and stream prioritization.
- Using protobuf IDL, which (versus JSON) is binary-encoded and thus takes advantage of HTTP/2 binary framing.
- Supporting bi-directional streaming (separate streams for client-initiated and server-initiated traffic), as sketched after this list.
- Spanning or multiplexing connections as necessary—many gRPC packets can binpack into an HTTP/2 packet, or a single gRPC packet can span multiple HTTP/2 packets.
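To make the streaming point above more concrete, here is a minimal, hypothetical Go sketch of a client using a bi-directional streaming RPC. The chat package and its Chat method are illustrative only (they are not part of the helloworld example used later in this post), and the sketch assumes a generated streaming client plus the usual context, io, fmt, and log imports.

// Hypothetical bi-directional streaming call; "chat" and Chat are illustrative.
stream, err := client.Chat(context.Background())
if err != nil {
	log.Fatal(err)
}
// Client-initiated traffic goes out on the stream...
go func() {
	_ = stream.Send(&chat.Message{Text: "hello"})
	_ = stream.CloseSend()
}()
// ...while server-initiated traffic is read back independently.
for {
	msg, err := stream.Recv()
	if err == io.EOF {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(msg.GetText())
}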
Implementing TCP vs. HTTP vs. gRPC
Networking professionals often refer to various protocols by their layer on the Open Systems Interconnection model (OSI model). This conceptual model is often represented as a pyramid, with each layer stacking on top of the previous layers. Each layer is able to delegate some responsibility to the lower layers and focus on its own advantages.
If each layer builds on the previous layers, we can also expect the code required for our application to decrease as we adopt higher-layer protocols.
TCP application
Suppose we want to connect to another TCP back end. In most programming languages (Golang and port 50051 were arbitrarily selected for these examples), this process is trivial:
conn, err := net.Dial("tcp", "mybackend.com:50051")
We can also send data to the back end and listen for a response:
name := "world"
fmt.Fprintf(conn, "%s\n", name)
response, err := bufio.NewReader(conn).ReadString('\n')
fmt.Printf("Received from backend: %s\n", response)
This approach works, but it only covers the most basic use case. We must rely on both the client and server to understand the context of the data sent, and the TCP protocol requires us to maintain a “connection” on both the client and the server. We also must account for any disconnects and understand whether we need to maintain that connection after the data has been received. Also, if the server has multiple functions, the server must decide which function to run based upon the sent data alone.
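For illustration only (this server is not part of the original example, and it assumes the standard library net, bufio, strings, fmt, and log packages), a minimal sketch of the back end shows the problem: all it receives is raw bytes, so any notion of “which function to run” has to be invented on top.

// Minimal TCP back-end sketch: the only context available is the raw bytes sent.
ln, err := net.Listen("tcp", ":50051")
if err != nil {
	log.Fatal(err)
}
for {
	conn, err := ln.Accept()
	if err != nil {
		continue
	}
	go func(c net.Conn) {
		defer c.Close()
		name, _ := bufio.NewReader(c).ReadString('\n')
		// With a single function this is enough; with several, the server
		// would have to invent its own convention (a prefix, a header, ...)
		// to decide which one to run.
		fmt.Fprintf(c, "Hello %s\n", strings.TrimSpace(name))
	}(conn)
}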
We can add some context inside our data to make it easier for each side to understand the request and response. Let’s use JSON to encode/decode our data. We’ll create two structs to hold our payloads:
type HelloRequest struct {
	Name string `json:"name"`
}

type HelloReply struct {
	Message string `json:"message"`
}
Then, we serialize our request to JSON and deserialize our response from JSON. Now our code looks like:
conn, err := net.Dial("tcp", "mybackend.com:50051")

name := "world"
request, err := json.Marshal(HelloRequest{
	Name: name,
})
fmt.Fprintf(conn, "%s\n", request)

status, err := bufio.NewReader(conn).ReadString('\n')
var response HelloReply
err = json.Unmarshal([]byte(status), &response)
fmt.Printf("Received from backend: %s\n", response.Message)
Using encoding allows us to create complex structures to send between services, but it doesn’t provide any error handling or multi-function support. Adding more features to this code will surely lengthen the code and add complexity. Instead of adding the features, we can shift our code from TCP to HTTP.
HTTP application
Suppose we want to retrieve text from a web page on the internet using this same application. We would need to format the text we send to work with the protocol and provide metadata about the connection. Handling that negotiation over raw TCP would give us the features of HTTP, but it would also leave us to work out how to interpret the output ourselves. Was our request successful? How do we tell the metadata apart from the data?
Fortunately, we can use HTTP protocol handlers to accomplish this same goal:
name := "world"
request, err := json.Marshal(HelloRequest{
	Name: name,
})
resp, err := http.Post("http://mybackend.com/SayHello", "application/json", bytes.NewBuffer(request))

decoder := json.NewDecoder(resp.Body)
var response HelloReply
err = decoder.Decode(&response)
fmt.Printf("Response from backend: %s\n", response.Message)
Although this example doesn’t seem like a significant change, it incorporates all the features of the protocol into the code. We did not have to specify the protocol version or connection status, and we relied on the http package to properly format our request. The http package also managed the TCP connection. The resulting response also contains metadata about the transaction, such as the status of the response, which can be used for error handling in the client code.
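As a small sketch of that idea (not part of the original code, and assuming the net/http, io, and fmt imports), the metadata can be checked before the payload is ever decoded:

// Use the transaction metadata (status line, status code) for error handling
// before touching the payload itself.
func checkResponse(resp *http.Response) error {
	if resp.StatusCode >= 400 {
		body, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("backend returned %s: %s", resp.Status, body)
	}
	return nil
}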
HTTP simplified the network connection for us, but does our application need to be aware of the network connection at all? Let’s assume that our HTTP call was part of a SayHello function, such as:
func SayHello(backend, name string) (string, error) {
	request, err := json.Marshal(HelloRequest{
		Name: name,
	})
	if err != nil {
		return "", err
	}

	resp, err := http.Post(backend+"/SayHello", "application/json", bytes.NewBuffer(request))
	if err != nil {
		return "", err
	}
	if resp.StatusCode >= 400 {
		// processErrCode (not shown) converts the error response into an error.
		return "", processErrCode(resp)
	}

	decoder := json.NewDecoder(resp.Body)
	var response HelloReply
	err = decoder.Decode(&response)
	if err != nil {
		return "", err
	}
	return response.Message, nil
}

func main() {
	backend := "http://mybackend.com"
	response, err := SayHello(backend, "world")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Response from backend: %s\n", response)
}
This function is standard across our use case, but it couples the server’s location and function call together and requires similar functions to be created for each call to the back end.
gRPC application
We can simplify this standardized function by allowing gRPC to implement the function call. gRPC includes code generators for many common programming languages (including Golang for our examples) that start with a protobuf definition file, such as:
syntax = "proto3";

package helloworld;

service Greeter {
  // Sends a message
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}
gRPC then uses a generator to create the helloworld package containing our Greeter service, which includes the SayHello function. Instead of having to write the function, we can now import the package and use that generated function to get our data.
import "helloworld"

func main() {
	conn, err := grpc.Dial("mybackend.com:50051", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	client := helloworld.NewGreeterClient(conn)
	response, err := client.SayHello(context.Background(), &helloworld.HelloRequest{
		Name: "world",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Response from backend: %s\n", response.Message)
}
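For reference, the generated helloworld package that makes this possible exposes a client interface shaped roughly like the following; the exact signatures depend on the protoc and grpc-go plugin versions in use.

// Rough shape of the generated client code in the helloworld package.
type GreeterClient interface {
	SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error)
}

// A constructor such as NewGreeterClient(conn) returns an implementation of
// this interface that sends each call over the supplied gRPC connection.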
Except for the grpc.Dial call (which is the minimal “network configuration” required), there is no protocol negotiation, and the gRPC call reads no differently from a function call inside our own application. This is a strong pattern that allows developers to focus on the business logic of their code without having to understand the implementation on the server, and it still allows us to have our application call a back end across a network. Additionally, we do not have to recreate our data structures; those data structures are generated and included in the package.
In short, the protocol we select determines how much programming is required to get to the same feature set, with TCP taking the most programming and gRPC taking the least.
- For our code to communicate over TCP, the code must tell the server how to establish the TCP connection and send/receive the data.
- For our code to communicate over HTTP, the code must establish the HTTP connection and know how to send/receive the data.
- For our code to communicate over gRPC, the code calls the server function as an extension of its own code. The connection is managed outside of the function call and is not needed to implement the function, as sketched below.
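As a sketch of that last point (the dial options shown are illustrative and assume a current grpc-go release with its credentials/insecure package, plus the context, time, and log imports), all of the network awareness can be concentrated in the dial, while the generated call itself stays a plain function call:

// All network concerns live in the dial; SayHello remains an ordinary call.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
conn, err := grpc.DialContext(ctx, "mybackend.com:50051",
	grpc.WithTransportCredentials(insecure.NewCredentials()),
	grpc.WithBlock(),
)
if err != nil {
	log.Fatal(err)
}
defer conn.Close()
client := helloworld.NewGreeterClient(conn)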
From this example, the greeter server implements the SayHello function but does not call it, and the greeter client calls SayHello without implementing it, all using gRPC.
Using gRPC across different programming languages
gRPC’s generator allows creation of the boilerplate code that implemented our SayHello function in Golang, but it also includes support for many different languages (see Supported languages and platforms). The above examples show a Golang implementation of a client. The generator also creates the boilerplate code for implementing the greeter server. The full example is in the grpc-examples repository, but the concept is to create a struct/class that implements our SayHello function as if that function would run locally. This is the greeter server implementation in Go:
type server struct {
	helloworld.UnimplementedGreeterServer
}

func (s *server) SayHello(ctx context.Context, in *helloworld.HelloRequest) (*helloworld.HelloReply, error) {
	log.Printf("Received: %v", in.GetName())
	return &helloworld.HelloReply{Message: "Hello " + in.GetName()}, nil
}
We then create a listener and gRPC server.
func main() {
	lis, _ := net.Listen("tcp", ":50051")
	s := grpc.NewServer()
	helloworld.RegisterGreeterServer(s, &server{})
	s.Serve(lis)
}
Commonly, separate development teams are responsible for implementing the client and server, and each team may prefer a different programming language. Suppose our client team prefers Python. gRPC’s generator can also generate the class structure in Python, and we simply need to call the function.
with grpc.insecure_channel(endpoint) as channel:
    stub = helloworld_pb2_grpc.GreeterStub(channel)
    response = stub.SayHello(helloworld_pb2.HelloRequest(name="world"))
print("Received from backend: " + response.message)
Despite using two different programming languages, we can now have our greeter communicate from client to server in a standardized and language-agnostic way.
To test, from the grpc-examples folder, start the greeter server (written in Go) by running:
go run examples/helloworld/greeter_server/main.go
In a separate terminal, run the greeter client (written in Python, requiring Python 3+):
python examples/helloworld/greeter_client_python/index.py world
The Python client takes the remaining arguments and passes them as name. The client also defaults to localhost:50051 if no other back end is provided.
The greeter client will output:
Received from backend: Hello world
Back in the first terminal for the greeter server, the output (showing a generic timestamp) is:
YYYY/MM/dd hh:mm:ss Received: world
The test shows that our client and server are communicating over gRPC using different programming languages.
Using gRPC across different compute types
Because gRPC abstracts the network implementation and focuses on function calls, we can also apply this same pattern across different types of compute. All of the above code works on a local workstation, but it can also be run on instances using Amazon Elastic Compute Cloud (Amazon EC2), containers using Amazon Elastic Container Service (Amazon ECS)/Amazon Elastic Kubernetes Service (Amazon EKS), or functions using AWS Lambda.
The grpc-examples repository includes an AWS Cloud Development Kit (AWS CDK) construct that creates an Amazon ECS service for the greeter server (written in Go) and an AWS Lambda function for the greeter client (written in Python). For demo purposes, the Amazon ECS service and Lambda function are both Amazon Virtual Private Cloud (Amazon VPC)-enabled and contained within the same VPC, with a security group allowing access.
Navigate to the walkthroughs/compute-options folder and run npm run build && cdk deploy --require-approval never (ensuring that npm and cdk are installed as prerequisites). Once the AWS CloudFormation stack is deployed, AWS CDK will specify the test commands to run, shown in the output below.
// Output
... Outputs:
ComputeOptionsStack.TestClientBash = aws lambda invoke --function-name ComputeOptionsStack-greeterclient01ABCDEF-ABCDEFGHIJKL --payload '{"name": "world"}' --log-type Tail --query LogResult --output text /dev/null | base64 -d
ComputeOptionsStack.TestServerBash = aws logs get-log-events --log-group-name ComputeOptionsStack-greeterservertaskdefgreeterserverLogGroup01ABCDEF-ABCDEFGHIJKLM --log-stream-name $(aws logs describe-log-streams --log-group-name ComputeOptionsStack-greeterservertaskdefgreeterserverLogGroup01ABCDEF-ABCDEFGHIJKLM --query "logStreams[0].logStreamName" --output text) --query "events[*].[message]" --output text
Stack ARN:
arn:aws:cloudformation:region:account_id:stack/ComputeOptionsStack/01234567-89ab-cdef-0123-456789abcdef
Run the following command (use the one from the above output) to get the logs from our function:
// Bash
aws lambda invoke --function-name ComputeOptionsStack-greeterclient01ABCDEF-ABCDEFGHIJKL --payload '{"name": "world"}' --log-type Tail --query LogResult --output text /dev/null | base64 -d
Note the response from the greeter:
// Output
START RequestId: 01234567-89ab-cdef-0123-456789abcdef Version: $LATEST
Greeter client received: Hello world
END RequestId: 01234567-89ab-cdef-0123-456789abcdef
REPORT RequestId: 01234567-89ab-cdef-0123-456789abcdef Duration: 12.80 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 77 MB
Check the greeter server logs using Amazon CloudWatch.
// Bash
aws logs get-log-events --log-group-name ComputeOptionsStack-greeterservertaskdefgreeterserverLogGroup01ABCDEF-ABCDEFGHIJKLM --log-stream-name $(aws logs describe-log-streams --log-group-name ComputeOptionsStack-greeterservertaskdefgreeterserverLogGroup01ABCDEF-ABCDEFGHIJKLM --query "logStreams[0].logStreamName" --output text) --query "events[*].[message]" --output text
The logs show that our greeter server received the message.
// Output
YYYY/MM/dd hh:mm:ss Received: world
Conclusion
gRPC makes it easy to build functions in code that can be separated by networks, programming languages, and even different types of compute resources. Instead of thinking about how to connect different services, we can build the services with a defined contract and write code to call and implement those functions. Using gRPC means services only need to be called by name; how to reach those services can be left to the network or service mesh that carries the gRPC traffic.
Further reading