The power of Docker lies in the container. Containers are a concept that has existed on Linux for years: one or more processes isolated from the rest of the system. You can roughly think of a container as a very lightweight virtual machine. A container carries all the files its processes need to run, independent of the host system, which is why containers have become such a commonly used deployment tool.
Docker has brought the power of these Linux containers to everyone and that’s why they are used in different production environments.
Docker solves a problem many developers face when working across multiple systems: containers let applications run consistently on any operating system, without worrying about differences between development environments and configurations.
What is Docker?
In short: Docker groups your installed applications together. When you want to deploy a project, you just run the corresponding container; there is no need to reinstall the applications it already contains. For example, if you have a container with PHP, Apache, and FTP already set up, then when you later deploy another project you simply reuse that container instead of installing PHP, Apache, and FTP all over again.
The general concept of Docker is like this:
Docker is an open source tool that handles the lifecycle of containers. It is used to simplify the way you build and deploy during development. That means you can have containers with all the dependencies you need to run your application and manage it until the end of development.
Depending on your needs, Docker containers can replace virtual machines. Virtual machines use more resources than containers because each one needs a full virtual copy of the operating system, plus the virtualized hardware to run it, which also takes up a fair amount of RAM.
Docker containers, on the other hand, do not need their own copy of the operating system: they share the host's kernel and use the physical server's resources directly, so there is no need to carve up the hardware the way virtual machines do.
That means containers are extremely lightweight: they can run on nearly any system configuration, and the application will still behave exactly as it did when you ran it locally.
With Docker, you can use containers to develop locally, share that container with other developers, and then use the same container to deploy to production. Once everything is in place, you can deploy your application as a standalone container or as an orchestrated service, and it will run exactly the way it did locally.
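As a sketch of that share-and-deploy workflow, the commands below tag a locally built image for a registry and push it. `registry.example.com/team` is a placeholder for whatever registry you use (run `docker login` against it first), and the `local-react:0.1` tag comes from the build later in this article; the block skips itself if that image does not exist.

```shell
# Illustrative only: share an image through a registry (placeholder names).
if docker image inspect local-react:0.1 >/dev/null 2>&1; then
  # add a registry-qualified tag to the existing image
  docker tag local-react:0.1 registry.example.com/team/local-react:0.1
  # upload it; teammates can then `docker pull` the very same tag
  docker push registry.example.com/team/local-react:0.1 \
    || echo "push failed (are you logged in to the registry?)"
else
  echo "local-react:0.1 not built yet; commands shown for illustration"
fi
```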
Why should you familiarize yourself with Docker?
Containers help solve the classic "it works on my machine" problem. Developers can share container images, then build and run the same container on different machines. When your code runs consistently regardless of the local environment, you can develop apps on any machine without having to change a bunch of configurations to make it match your own setup.
Working with Docker containers also makes it easier to deploy to any environment: you do not have to account for the extra resource consumption of a virtual machine. Docker also helps the reliability of your application by giving you one tool to manage every change to the code and the container during development.
How to work with Docker
There are two key concepts you need to know when working with Docker: images and containers.
Images
Docker images are templates for creating containers. An image specifies the packages and preconfigured environment used to run your application, and it is built from a set of files that provide the container's functionality.
These files include the dependencies, the application code, and any other settings you need. There are several ways to create a new image: you can take a running container, change a few things, and save the result as a new image, or you can build a new image from scratch with a Dockerfile.
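The first approach, saving a modified running container as a new image, can be sketched with `docker commit`. The container name `lr` and the new tag `local-react:0.2` are examples (the article only creates `lr` later); the block does nothing if no such container exists.

```shell
# Sketch: snapshot a running container's current state as a new image.
if docker inspect lr >/dev/null 2>&1; then
  docker commit lr local-react:0.2   # save the container's filesystem as a new image
  docker image ls local-react        # the new tag now appears alongside the old one
else
  echo "no 'lr' container running; commands shown for illustration"
fi
```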
We will take a look at the Docker image below and analyze it. Let’s start by creating a Dockerfile to run a React application.
# pull official base image
FROM node:alpine3.12
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
RUN npm install react-scripts@3.4.1 -g
EXPOSE 3000
# add app
COPY . ./
# start app
CMD ["npm", "start"]
Each line in this file starts with a Dockerfile instruction that tells Docker what to do. Here, I start from a Node base image to set up the environment needed to run the React app, then create the working directory for the container.
This is where the application code will live inside the container. Next, I add the local dependency binaries to the PATH and install the dependencies listed in your package.json. Then I tell Docker that the container listens on port 3000. Finally, I copy the application into the working directory and start it.
Now we can build the images with the Docker command:
docker build -t local-react:0.1 .
Don’t forget the “.” at the end of the line! It tells Docker to build the image from the files and directories in the current working directory.
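Once the build finishes, you can confirm the image exists by listing matching tags; the guard below just skips the command when no Docker daemon is reachable.

```shell
# List the freshly built image; the table is empty if the build did not run.
if docker info >/dev/null 2>&1; then
  docker image ls local-react
else
  echo "no Docker daemon reachable; command shown for illustration"
fi
```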
Containers
Now that you have successfully built an image, you can create a container from it. Run your image as a container using this Docker command:
docker run --publish 3000:3000 --detach --name lr local-react:0.1
This command takes your image and runs it as a container. Back in the image, you already exposed port 3000 of the container. With --publish, you forward traffic from port 3000 on the host to port 3000 in the container. This is required because otherwise firewall rules would prevent all network traffic from reaching your container.
--detach runs the container in the background of your terminal, so it does not take input or display output there. This is a common option, and you can always reattach the container to the terminal later if you need to. --name lets you give the container a name you can use in future commands; in this case, the container is named lr.
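Because the container is detached, its output goes to Docker rather than your terminal. A common way to look at it is `docker logs`, and `docker attach lr` would reconnect your terminal to the container (same `lr` name as above; the block skips itself if `lr` is not running).

```shell
# Peek at a detached container's output without reattaching to it.
if docker inspect lr >/dev/null 2>&1; then
  docker logs lr       # print everything the container has written so far
  # docker attach lr   # would reattach the terminal (Ctrl-C then stops the app)
else
  echo "no 'lr' container here; commands shown for illustration"
fi
```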
You can now access localhost:3000 and see your application running.
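When you are done, the container can be stopped and removed by name. This is a sketch assuming the `lr` container and `local-react:0.1` image created above:

```shell
# Stop and remove the container, then optionally the image.
if docker inspect lr >/dev/null 2>&1; then
  docker stop lr                 # send SIGTERM, then SIGKILL after a grace period
  docker rm lr                   # remove the stopped container
  docker rmi local-react:0.1     # optional: remove the image as well
else
  echo "no 'lr' container to clean up; commands shown for illustration"
fi
```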
Conclusion
Docker may not be used everywhere, but it’s a popular technology you should know about. It makes development on different systems more convenient.