Articles related to "docker"


Docker: Compose shared networks

  • We have a Docker Compose stack with Jenkins, SonarQube, and PostgreSQL; see the SonarQube: running tests from Jenkins Pipeline in Docker post.
  • Thus, if we need to restart SonarQube, we will have to restart all of them, including Jenkins, where jobs are running.
  • So the task is to split those three services into two Compose files, while keeping the containers able to communicate without changing the URLs they use to connect to each other.
  • We will use the external networks feature here, as sketched after this list.
  • Compose file version 3.5 is needed, since it lets us give the network a fixed name.
  • Otherwise, Docker will create a network named projectname_networkname, which doesn't look too good.
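
A minimal sketch of the split, assuming a shared network called ci-net (the network name, file names, and images are placeholders, not the original post's values). The Jenkins file creates the network with a fixed name, and the SonarQube file attaches to it as external:

```yaml
# docker-compose-jenkins.yml: creates the shared network with a fixed name
version: "3.5"
services:
  jenkins:
    image: jenkins/jenkins:lts
    networks:
      - ci-net
networks:
  ci-net:
    name: ci-net    # fixed name; without it, Docker prefixes the project name
```

```yaml
# docker-compose-sonar.yml: attaches to the network the Jenkins stack created
version: "3.5"
services:
  sonarqube:
    image: sonarqube
    networks:
      - ci-net
  db:
    image: postgres
    networks:
      - ci-net
networks:
  ci-net:
    external: true    # do not create; reuse the existing ci-net network
    name: ci-net
```

Containers keep resolving each other by service name across both stacks (e.g. Jenkins can still reach http://sonarqube:9000), and the SonarQube stack can now be restarted without touching Jenkins.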



A Gentle Introduction to Kubernetes

  • Until now, we have seen that the Replication Controller and Replica Set are two ways to deploy our container and manage it in a Kubernetes cluster.
  • Since our container will need some environment variables, the best way is to provide them in the Kubernetes Deployment definition file, as sketched after this list.
  • There is no need to wait for the external IP of the created service, since Minikube does not actually deploy a load balancer; that feature only works if you configure a load-balancer provider.
  • Originally built at Lyft, Envoy is a high-performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures.
  • Built on the learnings of solutions such as NGINX, HAProxy, hardware load balancers, and cloud load balancers, Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner.
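
A minimal sketch of such a Deployment definition, with placeholder names and variable values (not from the article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0    # placeholder image
          env:                 # environment variables for the container
            - name: DATABASE_URL
              value: "postgres://db:5432/app"
            - name: LOG_LEVEL
              value: "info"
```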



Docker for front-end developers

  • To fully understand the aforementioned terminologies, let’s set up Docker and run an example.
  • Let’s run a Docker container using Node.js.
  • Since this post is focused on front-end developers, let’s run a React application in Docker!
  • For that, I recommend using the create-react-app CLI, but you can use whatever project you have at hand; the process will be the same.
  • In this file, you'd normally specify the base image you want to use, which files will be inside, and whether you need to execute some commands before building; a sketch follows this list.
  • When working in yarn projects, the recommendation is to remove node_modules from /app and move it to the root.
  • In case you want to keep the changes to the container in your file system, you can use the -v flag and mount the current directory into /app.
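
A minimal sketch of such a Dockerfile for a create-react-app project (the Node version and commands are assumptions, not the author's exact file):

```dockerfile
# The base image to use
FROM node:12
# Everything below runs inside /app
WORKDIR /app
# Copy the manifests first so the install layer is cached between builds
COPY package.json yarn.lock ./
# Command executed while building the image
RUN yarn install
# The rest of the files that will be inside the image
COPY . .
# create-react-app's dev server listens on 3000
EXPOSE 3000
CMD ["yarn", "start"]
```

And to mount the current directory into /app so local changes are visible inside the container (the image name is assumed):

```bash
docker build -t my-react-app .
docker run -it --rm -p 3000:3000 -v "$(pwd)":/app my-react-app
```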


Show HN: Crawlab: Open-Source Web Crawler Admin Platform That Runs Any Language

  • Then execute the command below, and Crawlab Master Node + MongoDB + Redis will start up.
  • The architecture of Crawlab consists of a Master Node and multiple Worker Nodes, plus Redis and MongoDB databases, which are mainly for node communication and data storage.
  • The frontend app makes requests to the Master Node, which assigns tasks and deploys spiders through MongoDB and Redis.
  • When a Worker Node receives a task, it begins to execute the crawling task, and stores the results to MongoDB.
  • In the meantime, the Master Node synchronizes (deploys) spiders to the Worker Nodes via Redis and MongoDB GridFS.
  • The main functionality of the Worker Nodes is to execute crawling tasks, store results and logs, and communicate with the Master Node through Redis Pub/Sub. By increasing the number of Worker Nodes, Crawlab can scale horizontally, and different crawling tasks can be assigned to different nodes to execute; the topology is sketched below.
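
A hypothetical Compose sketch of that topology; the image tag, port, and omitted Crawlab-specific environment variables are assumptions here, so consult the project's README for the real file:

```yaml
version: "3"
services:
  master:                  # assigns tasks and deploys spiders; serves the frontend's API
    image: tikazyq/crawlab:latest
    ports:
      - "8080:8080"        # assumed frontend/API port
    depends_on:
      - mongo
      - redis
  worker:                  # executes crawling tasks and stores results/logs
    image: tikazyq/crawlab:latest
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo:latest    # data storage, plus GridFS for spider deployment
  redis:
    image: redis:latest    # Pub/Sub channel between Master and Worker Nodes
```

Scaling out is then a matter of adding Worker Node replicas, e.g. docker-compose up --scale worker=3.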



An 8-minute introduction to Kubernetes

  • Nodes provide the available cluster resources for k8s to keep data, run jobs, maintain workloads, and create network routes.
  • Being an orchestrator, controlling many resources of different workloads, k8s manages networking for pods, jobs, and any physical resource that requires communication.
  • This is also the place to configure auto-scaling, where additional replications are created when the system is loaded, as well as scale-in when those resources are no longer required to support the running workload.
  • While the application containers themselves can be immutable and be replaced with newer versions or healthier instances of themselves, their data needs to persist across those replacements.
  • maxUnavailable — a setting for what percentage (or exact number) of the workload may be unavailable while deploying a new version; 0% meaning “I have 2 containers, keep 2 alive and serving requests throughout the deployment” (a sketch follows this list).
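
A minimal sketch of those rollout knobs in a Deployment; the names and values here are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep both replicas alive and serving during the rollout
      maxSurge: 1         # allow one extra pod while the new version comes up
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21
```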


What is a Pod in Kubernetes?☸️💡🎉

  • We will define the apiVersion and also specify the kind of Kubernetes object (Pod, Deployment, Service, etc.); a sketch follows this list.
  • Kubernetes schedules the pods and runs them inside the cluster.
  • Here we use another busybox container and wait until the hello-mysql pod is up and running.
  • Once the container images are downloaded, the pod is assigned a Node in the Kubernetes cluster.
  • Whether or not the container booted correctly and is running all the required processes, Kubernetes assigns the Running phase to the pod.
  • The main advantage of Kubernetes is its ability to restart pods/containers automagically.
  • Liveness and readiness probes let us check a container by executing a command inside it, opening a TCP connection, or making an HTTP request, respectively.
  • When memory overshoots, Kubernetes simply restarts the pod based on the restartPolicy.
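
A minimal sketch of a Pod definition pulling these pieces together; the names, image, and probe values are placeholders:

```yaml
apiVersion: v1               # the apiVersion for a core Pod object
kind: Pod                    # the kind of Kubernetes object
metadata:
  name: hello-pod
spec:
  restartPolicy: Always      # how Kubernetes restarts the container on failure
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      livenessProbe:
        exec:                # probe by executing a command inside the container
          command: ["sh", "-c", "true"]
        initialDelaySeconds: 5
        periodSeconds: 10
```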



Run a personal Cloud with Traefik, Let's encrypt and Zookeeper

  • To save costs I chose to use "Preemptible VMs" as nodes to power my Kubernetes cluster on GKE.
  • According to Google's docs: "Preemptible VMs are Google Compute Engine VM instances that last a maximum of 24 hours and provide no availability guarantees." This means the nodes in my Kubernetes cluster randomly go down and are never up for more than 24 hours.
  • A concrete example I ran into: the Let's Encrypt production API has a rate limit of five certificates for the same domain per week.
  • I run just one replica here to save costs in my dev setup, but I've also scaled it up to three to test whether it would stay up 100% of the time even with random nodes going down, and everything works fine :) (a sketch follows this list).
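
A minimal sketch (not the author's actual manifest) of scaling the ingress to three replicas so it survives a preemptible node being reclaimed; the Traefik version is assumed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  replicas: 3            # tolerate random preemptible-node shutdowns
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - name: traefik
          image: traefik:1.7   # assumed version; the post's setup predates Traefik 2
```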



Generate jooq classes using docker containers

  • Generate jOOQ classes from an in-memory or ad-hoc database instead of connecting to prelive/live environments.
  • How to apply all the migrations using Liquibase before generating jOOQ classes.
  • Generate jOOQ classes based on the Postgres driver.
  • jOOQ supports generating classes by connecting to H2 (an in-memory database), but not to an in-memory Postgres.
  • We mostly use Postgres, and H2 does not support many features Postgres has.
  • Avoid using multiple Maven plugins and 100 lines of code; instead, use one Maven plugin.
  • Apply Liquibase migrations against the test container.
  • Generate jOOQ classes for the schema provided; the overall flow is sketched below.
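
The flow that single plugin automates, sketched here by hand for clarity; the container name, credentials, paths, and plugin invocation are assumptions, not the article's exact setup:

```bash
# Start a throwaway Postgres in Docker
docker run -d --name jooq-pg -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:12

# Apply all Liquibase migrations to it
liquibase --url=jdbc:postgresql://localhost:5432/postgres \
          --username=postgres --password=postgres \
          --changeLogFile=db/changelog.xml update

# Generate jOOQ classes from the migrated schema (codegen configured in pom.xml)
mvn jooq-codegen:generate

# Tear the database down
docker rm -f jooq-pg
```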
