Docker Compose

Compose has commands for managing the whole lifecycle of your application:

  • Start, stop, and rebuild services
  • View the status of running services
  • Stream the log output of running services
  • Run a one-off command on a service
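
For reference, the corresponding commands look like this (the service name web is illustrative):

docker-compose up -d          # create and start services in the background
docker-compose stop           # stop running services without removing them
docker-compose ps             # view the status of running services
docker-compose logs -f web    # stream the log output of a service
docker-compose run web sh     # run a one-off command on a service
docker-compose build web      # rebuild a service's image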

Deploying changes

When you make changes to your app code, remember to rebuild your image and recreate your app’s containers. To redeploy a service called web, use:

docker-compose build web
docker-compose up --no-deps -d web


$ sudo docker-compose build strapi
Building strapi
Step 1/2 : FROM strapi/strapi:3.0.0-beta.19.3-alpine
 ---> 25c6ef17defb
Step 2/2 : COPY ./app /srv/app
 ---> e798677fed49
Successfully built e798677fed49
Successfully tagged strapi3_strapi:latest

$ sudo docker-compose images
   Container       Repository    Tag       Image Id      Size 
--------------------------------------------------------------
strapi3_strapi_1   <none>       <none>   87fc37c874d5   368 MB
$ sudo docker-compose up --no-deps -d strapi
Recreating strapi3_strapi_1 ... done
$ sudo docker-compose ps
      Name                    Command               State           Ports
----------------------------------------------------------------------------------
strapi3_strapi_1   docker-entrypoint.sh strap ...   Up      0.0.0.0:9080->1337/tcp
$ sudo docker-compose images
   Container         Repository      Tag       Image Id      Size 
------------------------------------------------------------------
strapi3_strapi_1   strapi3_strapi   latest   e798677fed49   368 MB
$

This first rebuilds the image for web and then stops, destroys, and recreates just the web service. The --no-deps flag prevents Compose from also recreating any services that web depends on.

Control startup order

Control startup and shutdown order in Compose

Compose starts containers in dependency order (per depends_on), but for startup it does not wait until a container is “ready” (whatever that means for your particular application), only until it’s running. There’s a good reason for this.

The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.

To handle this, design your application to re-establish the database connection after a failure. If the application keeps retrying, it will eventually reconnect once the database is available (a minimal sketch follows the Compose example below).

The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason. However, if you don’t need this level of resilience, you can work around the problem with a wrapper script:

  • Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections. For example, to use wait-for-it.sh or wait-for to wrap your service’s command:
version: "2"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres
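
If you go with the application-level retry described above instead of a wrapper script, a minimal sketch for the web service might look like the following. It assumes a Python app (matching the python app.py command in the Compose file) and the psycopg2 Postgres driver; the credentials are placeholders, not values from the example.

import time

import psycopg2  # assumed driver; any database client works the same way


def connect_with_retry(host="db", port=5432, retries=30, delay=1.0):
    """Keep polling the database until it accepts connections instead of failing at startup."""
    for attempt in range(1, retries + 1):
        try:
            return psycopg2.connect(host=host, port=port,
                                    user="postgres", password="postgres")
        except psycopg2.OperationalError:
            print(f"database not ready ({attempt}/{retries}), retrying...")
            time.sleep(delay)
    raise RuntimeError("database never became available")


conn = connect_with_retry()

The same function can be reused whenever a query fails because the connection was lost, which covers the “whenever a connection is lost for any reason” case as well.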

Best practices for writing Dockerfiles

Run arbitrary commands inside an existing container

docker exec -it <mycontainer> bash

how to run docker exec on a docker-compose.yml
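
With Compose it is usually more convenient to address the service by its name from docker-compose.yml instead of the container name. Assuming a service called web (illustrative):

docker-compose exec web bash

docker-compose exec runs the command inside the service’s already-running container; use docker-compose run web bash if you want a fresh one-off container instead.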

Why gRPC?

gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in last mile of distributed computing to connect devices, mobile applications and browsers to backend services.

gRPC vs REST

REST vs. gRPC: Battle of the APIs

When to Use What: REST, GraphQL, Webhooks, & gRPC

Is gRPC better than REST? Where to use it?

How to choose between REST, GraphQL, and gRPC

REST

REST is probably the most general and most widely used API design approach. It is stateless and resource-centric: it defines a set of URL conventions for how resources are manipulated, while the type of operation is expressed through HTTP methods such as GET, POST, PUT, and DELETE.
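
As a small illustration of that resource-plus-verb convention, here is a hypothetical users resource sketched with Flask (the framework and routes are assumptions for illustration, not part of the comparison above):

from flask import Flask, jsonify, request

app = Flask(__name__)
users = {}  # in-memory store, for illustration only

@app.route("/users/<int:user_id>", methods=["GET"])     # read a resource
def get_user(user_id):
    return jsonify(users.get(user_id, {}))

@app.route("/users", methods=["POST"])                  # create a resource
def create_user():
    user = request.get_json()
    users[len(users) + 1] = user
    return jsonify(user), 201

@app.route("/users/<int:user_id>", methods=["DELETE"])  # delete a resource
def delete_user(user_id):
    users.pop(user_id, None)
    return "", 204

if __name__ == "__main__":
    app.run()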

gRPC

RPC is mainly used for method calls between servers, and the single most important factor in its performance is serialization/deserialization efficiency.

gRPC is a fresh take on RPC. Its most distinctive feature is that data is encoded with protobufs (Protocol Buffers), which further speeds up serialization and reduces packet size.

The trade-off is that both sides must know the interface definition before they can serialize/deserialize.

There are also ways to expose gRPC as an HTTP service so that web clients can enjoy its efficiency and low overhead. But don’t forget that the most common scenarios for RPC are hardware domains such as IoT; a web page probably doesn’t care about saving a few KB of traffic.

Video course

The complete gRPC course [Protobuf, Go, Java]

gRPC on .NET Core

Introduction to gRPC on .NET Core

The main benefits of gRPC are:

  • Modern, high-performance, lightweight RPC framework.
  • Contract-first API development, using Protocol Buffers by default, allowing for language agnostic implementations.
  • Tooling available for many languages to generate strongly-typed servers and clients.
  • Supports client, server, and bi-directional streaming calls.
  • Reduced network usage with Protobuf binary serialization.

These benefits make gRPC ideal for:

  • Lightweight microservices where efficiency is critical.
  • Polyglot systems where multiple languages are required for development.
  • Point-to-point real-time services that need to handle streaming requests or responses.

gRPC not supported on Azure App Service

ASP.NET Core gRPC is not currently supported on Azure App Service or IIS. The HTTP/2 implementation of Http.Sys does not support HTTP response trailing headers which gRPC relies on. For more information, see this GitHub issue.