Monday, August 31, 2020

Install Docker On Ubuntu

STEP1. Update the Ubuntu package index:
sudo apt update

STEP2.  Install a few prerequisite packages which let apt use packages over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

STEP3. Add the GPG key for the official Docker repository to your system

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

STEP4. Add the Docker repository to APT sources:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
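
Note: "bionic" is the codename for Ubuntu 18.04. On a different Ubuntu release, the codename can be filled in automatically, as a variation on the command above:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"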

STEP5. Update the package database with the Docker packages from the newly added repo:

sudo apt update

STEP6. Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

apt-cache policy docker-ce

STEP7. Finally, install Docker:

sudo apt install docker-ce

STEP8. Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

sudo systemctl status docker

STEP9. To avoid typing sudo whenever you run the docker command, add your username to the docker group:

sudo usermod -aG docker ${USER}

STEP10. To apply the new group membership, log out of the server and back in, or type the following:

su - ${USER}
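
Alternatively, the new group membership can usually be activated in the current shell without logging out:

newgrp docker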

STEP11. Confirm that your user is now added to the docker group by typing:

id

STEP12. If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly:

sudo usermod -aG docker username

STEP13. Start the Docker service.

sudo service docker start

STEP14. docker run helloworld error and its resolution
~$ docker run helloworld
Unable to find image 'helloworld:latest' locally
docker: Error response from daemon: pull access denied for helloworld, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
~$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: lqwangxg
Password:
WARNING! Your password will be stored unencrypted in /home/wangxg/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
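
Note: the official test image on Docker Hub is named hello-world (with a hyphen); the "repository does not exist" error above comes from the missing hyphen rather than from not being logged in, so the following works without docker login:

docker run hello-world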


 

Proxy Auto-Configuration (PAC) file

A Proxy Auto-Configuration (PAC) file is a JavaScript function.


Sample:

function FindProxyForURL(url, host) {
  if (dnsDomainIs(host, "goup.mycompany")
   || dnsDomainIs(host, "sub.mycompany")) {
    return "PROXY 10.221.0.245:8080";
  } else if (isInNet(host, "192.168.0.0", "255.255.252.0")
          || isInNet(host, "192.168.1.0", "255.255.255.0")
          || isInNet(host, "10.0.108.0", "255.255.255.0")
          || isInNet(host, "10.2.108.0", "255.255.255.0")
          || isInNet(host, "172.16.108.0", "255.255.255.0")
          || isInNet(host, "127.0.0.1", "255.255.255.255")
          || shExpMatch(host, "*.mycorporation.com")
          || shExpMatch(host, "local.mycomp.com")
          || shExpMatch(host, "localhost")) {
    return "DIRECT" ; //NO PROXY
  }else{
    return "PROXY 172.16.108.253:8080"; 
  }
}
The parameters:
  • url: the full URL being requested.
  • host: the hostname extracted from the URL.
Ref: Proxy_Auto-Configuration_(PAC)_file

Docker/Git Proxy Settings.
HTTPS_PROXY=http://login:password@yourproxy:8080
https_proxy=http://login:password@yourproxy:8080
HTTP_PROXY=http://login:password@yourproxy:8080
http_proxy=http://login:password@yourproxy:8080
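
As a sketch of where these variables are typically applied (the proxy address and credentials above are placeholders): the Docker daemon reads them from a systemd drop-in file, and Git from its http.proxy setting.

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://login:password@yourproxy:8080"
Environment="HTTPS_PROXY=http://login:password@yourproxy:8080"

# reload and restart the daemon so it picks up the proxy
sudo systemctl daemon-reload
sudo systemctl restart docker

# Git proxy setting
git config --global http.proxy http://login:password@yourproxy:8080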

Saturday, August 29, 2020

How to build an image from a Dockerfile

1. Build a Docker image from a Dockerfile (no file extension)

$ docker build .

2. Build an image from a Dockerfile at a specific path

$ docker build -f /path/to/a/Dockerfile .

3. Build an image with a tag name

$ docker build -t shykes/myapp .
# or apply multiple tags to the same build
$ docker build -t shykes/myapp:1.0.2 -t shykes/myapp:latest .
Build cache is only used from images that have a local parent chain. This means that these images were created by previous builds or the whole chain of images was loaded with docker load. If you wish to use build cache of a specific image you can specify it with --cache-from option. Images specified with --cache-from do not need to have a parent chain and may be pulled from other registries.
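
For example (myimage:latest is just a placeholder tag), a previously pulled image can be used as a cache source:

docker pull myimage:latest
docker build --cache-from myimage:latest -t myimage:latest .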

BuildKit

Starting with version 18.09, Docker supports a new backend for executing your builds that is provided by the moby/buildkit project. The BuildKit backend provides many benefits compared to the old implementation. For example, BuildKit can:

  • Detect and skip executing unused build stages
  • Parallelize building independent build stages
  • Incrementally transfer only the changed files in your build context between builds
  • Detect and skip transferring unused files in your build context
  • Use external Dockerfile implementations with many new features
  • Avoid side-effects with rest of the API (intermediate images and containers)
  • Prioritize your build cache for automatic pruning

To use the BuildKit backend, you need to set an environment variable DOCKER_BUILDKIT=1 on the CLI before invoking docker build.
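
For example:

DOCKER_BUILDKIT=1 docker build .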

4. Dockerfile format:

# Comment
INSTRUCTION arguments
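
A minimal illustration of that format (image, paths, and commands are placeholders only):

# Comment: build on a base image, copy sources, build, and set the default command
FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD ["python", "/app/app.py"]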


5. Get a shell inside a running container.

docker exec -it  <ContainerID> /bin/bash

*On Alpine-based images there is no bash, only sh; /bin/sh works instead.
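
For example, on an Alpine-based container:

docker exec -it <ContainerID> /bin/sh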

6. docker run -v (mount a host folder into the container)

docker run -it -v c:\Data:c:\shareddata microsoft/windowsservercore powershell
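
A Linux equivalent (the host path is a hypothetical example) bind-mounts a host folder into the container:

docker run -it -v /home/user/data:/shareddata ubuntu bash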

 



Friday, August 28, 2020

Docker memo with Dockerfile best practices

1. Getting started

docker run -d -p 80:80 docker/getting-started

2. Create a Dockerfile (no file extension).

FROM node:12-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]

3. Build an image from the Dockerfile

docker build -t getting-started .

-t: tag the image with a name

4. Start an App Container

docker run -dp 3000:3000 getting-started 

-d: detached; run in the background
-p: host_port:container_port (outer:inner)

5. Remove Old Container 

docker ps

docker stop <ContainerID>

docker rm <ContainerID>

Or -------------

docker rm -f <ContainerID>

6. Tag the Image Before Pushing It to a Repository

 docker tag image-name  userid_of_repository/image-name

7. Push The Image To The Repository.

docker push userid_of_repository/image-name
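
For example, using the getting-started image and the Docker Hub ID from the login above (the repository name is only an illustration):

docker tag getting-started lqwangxg/getting-started
docker push lqwangxg/getting-started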

8. Run a Command in an ubuntu Container

docker run -it ubuntu ls /

-i: interactive, -t: allocate a pseudo-TTY
*Without -d the container runs in the foreground; once ls finishes, the container exits.

9. Run a command inside a running container.

docker exec <container-id> cat /data.txt

10. Create a Container Volume

docker volume create  db_folder

11. Mount the Volume

docker run -dp 3000:3000 -v db_folder:/etc/data  getting-started

-v: mount the db_folder volume to the /etc/data folder of the container.
     Any change in db_folder is visible to both the host and the container.

12. Show Info of the Volume

docker volume inspect db_folder

[
  {
    "CreatedAt": "2020-08-28T02:18:36Z",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/db_folder/_data",
    "Name": "db_folder",
    "Options": {},
    "Scope": "local"
  }
]

13. Make a dev-mode Container

# In bash use \ to continue onto the next row; in PowerShell use ` instead.
docker run -dp 3000:3000 \
  -w /app -v "$(pwd):/app" \
  node:12-alpine \
  sh -c "yarn install && yarn run dev"   # use yarn (like npm) to install and run

14. Show Logs of a Container

docker logs -f <container_id>

$ nodemon src/index.js
[nodemon] 1.19.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] starting `node src/index.js`
Using sqlite database at /etc/todos/todo.db
Listening on port 3000

15. Multi-Container Apps

Caution: each container should do one thing and do it well.

16. Create the network

docker network create todo-app

17. Start a MySQL container and attach it to the network.

# In bash, \ continues the command onto the next row.
docker run -d \
  --network todo-app --network-alias mysql \
  -v todo-mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=todos \
  mysql:5.7

18. Confirm the MySQL database is up and running.

docker exec -it <mysql-container-id> mysql -p
When the password prompt comes up, type in secret, then:
mysql> SHOW DATABASES;

You should see output that looks like this:

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| todos              |
+--------------------+
5 rows in set (0.00 sec)

19. Use the nicolaka/netshoot container (which ships with a lot of networking tools) for troubleshooting or debugging networking issues.

docker run -it  --network todo-app nicolaka/netshoot

Inside the container, we're going to use the dig command (a DNS tool):
dig mysql 

And you'll get an output like this...

; <<>> DiG 9.14.1 <<>> mysql
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32162
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;mysql.             IN  A

;; ANSWER SECTION:
mysql.          600 IN  A   172.23.0.2

;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Tue Oct 01 23:47:24 UTC 2019
;; MSG SIZE  rcvd: 44

20. Run the App with MySQL in Dev Mode

First, a few environment variables need to be set:

MYSQL_HOST
MYSQL_USER
MYSQL_PASSWORD
MYSQL_DB


docker run -dp 3000:3000 \
  -w /app -v "$(pwd):/app" \
  --network todo-app \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DB=todos \
  node:12-alpine \
  sh -c "yarn install && yarn run dev"

21. Using Docker Compose: Create a Compose File.

The docker-compose.yml  contents:
---------------------------------------------

version: "3.7"

services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:5.7
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:

22. Running the Application Stack from the docker-compose.yml File

docker-compose up -d 
-d: run everything in the background

23. Image Building Best Practices

1. Use the docker image history command to see the layers in the getting-started image you created earlier in the tutorial.
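
For example:

docker image history getting-started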

You should get output that looks something like this (dates/IDs may be different).

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
a78a40cbf866        18 seconds ago      /bin/sh -c #(nop)  CMD ["node" "src/index.j…    0B                  
f1d1808565d6        19 seconds ago      /bin/sh -c yarn install --production            85.4MB              
a2c054d14948        36 seconds ago      /bin/sh -c #(nop) COPY dir:5dc710ad87c789593…   198kB               
9577ae713121        37 seconds ago      /bin/sh -c #(nop) WORKDIR /app                  0B                  
b95baba1cfdb        13 days ago         /bin/sh -c #(nop)  CMD ["node"]                 0B                  
<missing>           13 days ago         /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B                  
<missing>           13 days ago         /bin/sh -c #(nop) COPY file:238737301d473041…   116B                
<missing>           13 days ago         /bin/sh -c apk add --no-cache --virtual .bui…   5.35MB              
<missing>           13 days ago         /bin/sh -c #(nop)  ENV YARN_VERSION=1.21.1      0B                  
<missing>           13 days ago         /bin/sh -c addgroup -g 1000 node     && addu…   74.3MB              
<missing>           13 days ago         /bin/sh -c #(nop)  ENV NODE_VERSION=12.14.1     0B                  
<missing>           13 days ago         /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
<missing>           13 days ago         /bin/sh -c #(nop) ADD file:e69d441d729412d24…   5.59MB

Each of the lines represents a layer in the image. The display here shows the base at the bottom with the newest layer at the top. Using this, you can also quickly see the size of each layer, helping diagnose large images.
Layer Caching
An important lesson for decreasing build times of your container images is layer caching.
Once a layer changes, all downstream layers have to be recreated as well.
So, restructure the Dockerfile to support caching of the dependencies. For Node-based applications, the yarn dependencies only need to be reinstalled when package.json changes.

  1. Update the Dockerfile to copy in the package.json first, install dependencies, and then copy everything else in.

    FROM node:12-alpine
    WORKDIR /app
    COPY package.json yarn.lock ./
    RUN yarn install --production
    COPY . .
    CMD ["node", "src/index.js"]
    
  2. Create a file named .dockerignore in the same folder as the Dockerfile with the following contents.

    node_modules
    

    .dockerignore files are an easy way to selectively copy only image-relevant files.

  3. Build a new image using docker build

docker build -t getting-started .

Multi-Stage Builds
  • Separate build-time dependencies from runtime dependencies
  • Reduce overall image size by shipping only what your app needs to run
  • Each instruction creates one layer:

    • FROM creates a layer from the ubuntu:18.04 Docker image.
    • COPY adds files from your Docker client’s current directory.
    • RUN builds your application with make.
    • CMD specifies what command to run within the container.
  • Minimize the number of layers
  • Only the instructions RUN, COPY, and ADD create layers.
    Other instructions create temporary intermediate images and do not increase the size of the build.

  • Where possible, use multi-stage builds, and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image.

Pipe Dockerfile through stdin

docker build -t myimage:latest -<<EOF
FROM busybox
RUN echo "hello world"
EOF

BUILD FROM A LOCAL BUILD CONTEXT, USING A DOCKERFILE FROM STDIN

docker build [OPTIONS] -f- PATH
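
A minimal sketch (somefile.txt stands for any file already present in the build context):

docker build -t myimage:latest -f- . <<EOF
FROM busybox
COPY somefile.txt ./
RUN cat /somefile.txt
EOF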

Sort multi-line arguments

Here’s an example from the buildpack-deps image:

RUN apt-get update && apt-get install -y \
    aufs-tools \
    automake \
    build-essential \
    curl \
    dpkg-sig \
    libcap-dev \
    libsqlite3-dev \
    mercurial \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.* \
 && rm -rf /var/lib/apt/lists/*

Avoid using ADD to fetch files from a URL; use curl instead. Avoid doing things like:

ADD http://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all

And instead, do something like:

RUN mkdir -p /usr/src/things \
    && curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    && make -C /usr/src/things all

Maven/Tomcat Example

When building Java-based applications, a JDK is needed to compile the source code to Java bytecode. However, that JDK isn't needed in production. Also, you might be using tools like Maven or Gradle to help build the app. Those also aren't needed in our final image. Multi-stage builds help.
FROM maven AS build
WORKDIR /app
COPY  .  .
RUN mvn package
# The build stage above is not included in the final image; it is only used to compile the Java code.
FROM tomcat 
COPY --from=build /app/target/file.war  /usr/local/tomcat/webapps

React Example
When building React applications, we need a Node environment to compile the JS code (typically JSX), SASS stylesheets, and more into static HTML, JS, and CSS. If we aren't doing server-side rendering, we don't even need a Node environment for our production build. Why not ship the static resources in a static nginx container?
FROM node:12 AS build
WORKDIR /app
COPY package* yarn.lock ./
RUN yarn install
COPY public ./public
COPY src ./src
RUN yarn run build
# The above stage only compiles to html, js, css.

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html

Here, we are using a node:12 image to perform the build (maximizing layer caching) and then copying the output into an nginx container.