@josheir That depends on what “this” means :-)
nginx
There are some pretty easy quick starts for nginx. On the latest service I spun up, I made a myservice.conf file and dropped it into /etc/nginx/sites-enabled:
server {
    listen 3001 default_server;
    listen [::]:3001 default_server;

    # make these really drastic, because the Python process is serial.
    client_body_timeout 10;
    client_header_timeout 10;
    keepalive_timeout 10;
    lingering_timeout 10;
    send_timeout 10;
    proxy_connect_timeout 10;
    proxy_read_timeout 10;
    proxy_send_timeout 10;

    root /var/www/html;
    index index.html;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
This says “nginx listens on port 3001 and forwards incoming connections to a local process listening on port 3000.” (This hosts a serial Python process that talks to a GPU to do some machine learning snake oil – hence the complaint about Python in the comment above.)
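For the curious: the process behind that proxy_pass can be anything that speaks HTTP on 127.0.0.1:3000. As a purely illustrative sketch, here's a minimal C++ backend using the cpp-httplib library from the “http library” section below:

// Minimal backend sketch with cpp-httplib; it answers on the loopback
// port that the nginx config above forwards to.
#include "httplib.h"

int main() {
    httplib::Server svr;
    svr.Get("/", [](const httplib::Request &, httplib::Response &res) {
        res.set_content("hello from the backend\n", "text/plain");
    });
    svr.listen("127.0.0.1", 3000); // matches proxy_pass http://127.0.0.1:3000
}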
Docker
There are some somewhat deeper tutorials for Docker, if you haven't gotten into that yet. On the one hand, it's how essentially all services are deployed these days; on the other hand, it's a bit different from how “ship binaries onto the hardware/VM” worked ten years ago.
You need to install two pieces: the docker build tooling on your development and CI machines, and the docker runtime on your actual servers. Docker uses a Dockerfile (similar to a makefile) to control how to package up the software in question. It can look something like:
# Simple Dockerfile to copy bin/myservice into a self-contained container
FROM ubuntu:latest
# Update and install in one layer, so a cached layer can't hold stale package lists.
RUN apt-get update && apt-get install -y ca-certificates lsb-release wget # add other dependencies maybe
# The binary lands in / (the default working directory), which isn't on PATH,
# so the CMD below uses an explicit ./ path.
COPY bin/myservice myservice
EXPOSE 8000
CMD ["./myservice", "--port=8000", "--log-level=3", "--database=my.database.com:5432", "--datadir=/var/data"]
You build this with something like docker build -t myservice .
If you don't use static linking, you also need to install the dependencies in the Dockerfile. Each invocation starts as if on a brand new, uninstalled system; caching makes the docker build process acceptable over time. (But all the cool kids are using static linking to not have to worry about shared libraries anymore.)
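To make the static-linking route concrete, here's a hedged sketch of a multi-stage build; the gcc:13 base image, main.cpp, and the compile flags are placeholders, and fully static glibc builds have their own caveats (name resolution, for one), so treat it as a starting point:

# Stage 1: build a statically linked binary (placeholder source and flags).
FROM gcc:13 AS build
COPY . /src
WORKDIR /src
RUN g++ -O2 -static -o myservice main.cpp

# Stage 2: ship just the binary. "scratch" is an empty image, so there
# are no shared libraries to keep in sync at all.
FROM scratch
COPY --from=build /src/myservice /myservice
EXPOSE 8000
CMD ["/myservice", "--port=8000"]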
You then run it locally with something like:
docker run -it --rm --name myservice -p 4321:8000 -v /home/me/mydata:/var/data myservice
Or, if you want to start it in the background:
docker run -d --name myservice -p 4321:8000 -v /home/me/mydata:/var/data myservice
This says “what the inside of the container thinks is port 8000 should be port 4321 on the host, and what the inside of the container thinks is path /var/data should be path /home/me/mydata on the host.” Note the argument order: the host side comes first, so -p is host-port:container-port and -v is host-path:container-path.
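Once it's up, you can sanity-check the mapping from the host side (assuming the service answers on /):
curl http://localhost:4321/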
A lot to wrap your head around! The CPU runs microcode to implement an instruction set, the instruction set runs a hypervisor, the hypervisor runs a VM, the VM runs a kernel, the kernel runs a container host, and the container host runs your service in a container. (If you're lucky, the service is written in .NET or Java and has its own second VM, that in turn runs the service …) As a developer, 90% of the time you can ignore the layers you're not concerned about, but in live use cases it helps a lot to have an idea of what's going on all through the stack – for security, for performance, and for lack of surprises :-)
http library
Each http library comes with its own “quick start” instructions, of varying quality depending on the library. I've personally used https://github.com/yhirose/cpp-httplib and it's alright. It has reasonable getting-started documentation; nothing much to say.
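For flavor, talking to the dockerized service above through its host-side port would look something like this with that library (the port is just the one from the docker run example):

// Minimal cpp-httplib client sketch.
#include "httplib.h"
#include <iostream>

int main() {
    httplib::Client cli("localhost", 4321); // host-side port from the docker run example
    if (auto res = cli.Get("/")) {
        std::cout << res->status << "\n" << res->body << "\n";
    } else {
        std::cerr << "request failed\n";
    }
}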
running at work
“At work” depends on where you work and how it's set up! Modern software deployment organizations have an “infrastructure” team, which provides some kind of host and service routing fabric; you then build your own containers and provide manifests to the infrastructure team. Their job is to keep the fabric/hardware up; your role is to build good containers that do what they're supposed to.

If you're alone, you'll probably use a simple container service like AWS Elastic Container Service, Amazon Lightsail, or Google Cloud Run. Once you want “more than one” of everything, you might move to a hosted Kubernetes service like AWS EKS or Gcloud GKE. Those take over the job of the “infrastructure team”; you deploy to them typically by describing all the bits you need (database containers, cache containers, service containers, …) in yaml files, and have the yaml files point at some docker repository where it gets the container images.
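For a taste of those yaml files, here's a hedged sketch of a minimal Kubernetes Deployment; the names and the image repository are placeholders:

# Hypothetical manifest; "my.docker.repo" stands in for your registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
        - name: myservice
          image: my.docker.repo/myservice:latest
          ports:
            - containerPort: 8000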
Even if your “at work” just provides bare hosts or VMs, there will likely be some reverse proxy that needs to be configured for your service to be discovered, but how that works depends entirely on the implementation. It's useful to understand the bits and pieces that all have to come together to make it work, though; it gives you some concepts and words to look for to keep it all together. No matter whether you use F5 BIG-IP or envoy or nginx or perlbal, they all end up having to do the same thing, because that's how the internet works :-)
basic server skills
The modern internet is built on strata of underlying technologies, and it helps to know at least a little bit about how each of them works. Anything from “the HTTP protocol” to “kernel resource virtualization” will absolutely affect how your service runs. Unfortunately, I don't know of great materials for “the whole stack” – it's either newbie stuff that glosses over too many details, or for-experts deep dives aimed at people who are going to do a lot of work at each layer. But a few days of reading web pages, watching a few youtube videos at 1.75x speed, and, most importantly, playing around by yourself, usually helps!