Saturday, 26 October 2019

Docker File System

Changes can be made in Layers 4 & 5 (the upper layers of the image).

The layers of a built image are read-only, while the container layer on top of them is read-write.

When a file from a read-only image layer needs to be modified, it is first copied up into the writable container layer; this is the copy-on-write mechanism.
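A quick way to observe copy-on-write in practice (a sketch; the container name and file are illustrative, not from these notes):

    docker run -d --name cow-demo ubuntu sleep infinity
    docker exec cow-demo sh -c 'echo test >> /etc/hosts'
    docker diff cow-demo
    # C /etc
    # C /etc/hosts

docker diff lists files added (A) or changed (C) in the writable container layer; the original copy of /etc/hosts in the read-only image layer is untouched.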

The image below represents normal (volume) storage, which resides on the Docker host machine.
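A minimal sketch of using such host-managed storage with a named volume (the volume, container, and image names are illustrative):

    docker volume create mydata
    docker run -d --name vol-demo -v mydata:/var/www/html httpd

Docker manages where the volume lives on the host (on Linux, typically under /var/lib/docker/volumes/), and the data survives after the container is removed.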

The image below represents a bind mount, which maps a directory from the host (possibly on an external storage system) into the container.
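A minimal sketch of a bind mount (the host path, container names, and image are illustrative):

    docker run -d --name bind-demo -v /data/site:/var/www/html httpd
    # equivalently, with the newer --mount syntax:
    docker run -d --name bind-demo2 --mount type=bind,source=/data/site,target=/var/www/html httpd

Unlike a volume, the host path is chosen and managed by us and can sit on externally mounted storage.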

The storage driver varies with the Docker host OS; Ubuntu, for example, supports the AUFS storage driver.
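To check which storage driver a given Docker host is using (the output shown is illustrative and will differ per OS):

    docker info --format '{{.Driver}}'
    # overlay2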



Docker Commands


Ctrl+P followed by Ctrl+Q detaches you from a running container. After you detach, the container keeps running.

Top:
It shows the top-level processes running inside a container (docker top <container>).



Stop:
This command is used to stop a running container (docker stop <container>).


Remove:
This command is used to remove a Docker container (docker rm <container>).


Stats:
This command shows live statistics such as CPU and memory usage of containers (docker stats).



Attach:
This command attaches your terminal to a running container (docker attach <container>).


Pause:
This command pauses all processes running inside a container (docker pause <container>).



Unpause:
This command resumes the processes of a paused container (docker unpause <container>).


Kill:
This command kills the main process of a running container by sending it a signal (docker kill <container>).



nsenter:
This utility allows one to enter a running container's namespaces and get a shell without attaching to it, so exiting that shell does not stop the container.

Remove stopped containers:
docker system prune   (this also removes unused networks, dangling images, and build cache)
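Putting several of these commands together in one session (the container name is illustrative):

    docker run -d --name demo ubuntu sleep infinity   # start a detached container
    docker top demo                                    # top-level processes inside it
    docker stats --no-stream demo                      # one-shot CPU/memory statistics
    docker pause demo                                  # freeze its processes
    docker unpause demo                                # resume them
    docker stop demo                                   # stop the container
    docker rm demo                                     # remove the stopped container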



Detached:
In detached mode (-d), the container keeps running in the background even after we exit or detach from it.

Port Mapping:

-p publishes a container port on the host (e.g. a data port and a control port).
-p 80:80 → host port:container port
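For example (the host port and image are illustrative):

    docker run -d --name web -p 8080:80 nginx
    # nginx listens on port 80 inside the container and is reachable on host port 8080
    curl http://localhost:8080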




Docker images
# docker images        -- list local images

# docker ps -a         -- list all containers (running and stopped)
# docker ps            -- list running containers

docker run             -- create and start a new container


Log in to a running container
docker exec -it 26f0edf89ftw cmd
Docker commit
docker commit -m "Application installed" 54f04da39fgf app:1.0
After commit, run the new image
docker run -it app:1.0 cmd

Docker File


Introduction to Dockerfile:

Dockerfile is a text file which can be used to automate the building of customized container images.

FROM----ADD----RUN----CMD----ENTRYPOINT----ENV

FROM: this keyword is used to define the base image on which we will be building. Eg: FROM ubuntu

ADD: This keyword is used to add files to the image being built. Eg: ADD <source> <destination_in_container>

      Ex: ADD . /var/www/html

COPY: This keyword copies files from the build context (the current working directory) into the image.

WORKDIR: This keyword sets the working directory inside the container for the instructions that follow.

RUN: This keyword is used to run a command while building the image, e.g. an installation.
              Ex: RUN yum -y install httpd

CMD: This keyword is used to run a command when the container starts. It runs only when no argument is specified while running the container (arguments passed to docker run override it).

Eg: to start the Apache service when the container starts, put that command in CMD; it is triggered during container start.

              CMD /etc/init.d/httpd start   -- command run when no arguments are given.

ENTRYPOINT: This keyword is used to run a command that always executes at container start; arguments passed to docker run are appended to it rather than replacing it (unlike CMD).

              ENTRYPOINT apachectl -D FOREGROUND

ENV: This keyword is used to set environment variables in the container.

              ENV JAVA_HOME /var/java_home/

ONBUILD: This keyword is used in a parent image's Dockerfile; the instruction is recorded as a trigger and is executed only when that parent image is used as the base image in a child Dockerfile build. A sketch follows below.
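A minimal sketch of how ONBUILD defers an instruction (the image names and path are illustrative):

    # parent Dockerfile
    FROM ubuntu
    ONBUILD COPY . /app        # recorded as a trigger, not executed now

    # child Dockerfile
    FROM parent-image          # the deferred COPY executes at this point of the child build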

Build a dockerfile:

# docker build -t <new_image_name> <path_of_dockerfile>

--this will create a new customized image for me.

Docker File:
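A small example Dockerfile pulling the keywords above together (a sketch; the package, paths, and tag are illustrative, not from these notes):

    FROM ubuntu
    RUN apt-get update && apt-get install -y apache2
    ENV APACHE_LOG_DIR /var/log/apache2
    WORKDIR /var/www/html
    COPY . .
    EXPOSE 80
    ENTRYPOINT ["apachectl", "-D", "FOREGROUND"]

Build and run it with:

    docker build -t myapp:1.0 .
    docker run -d -p 8080:80 myapp:1.0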

Docker Swarm

Docker includes swarm mode for natively managing a cluster of Docker Engines called a swarm. You can use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior.


Overlay networking and service discovery

You will complete the following steps as part of this lab.

Step 1: Create a new Swarm

In this step you'll initialize a new Swarm, join a single worker node, and verify the operations worked.
  1. Execute the following command on node1.
    node1$ docker swarm init
    Swarm initialized: current node (cw6jpk7pqfg0jkilff5hr8z42) is now a manager.
    To add a worker to this swarm, run the following command:
    
    docker swarm join \
    --token SWMTKN-1-3n2iuzpj8jynx0zd8axr0ouoagvy0o75uk5aqjrn0297j4uaz7-63eslya31oza2ob78b88zg5xe \
    172.31.34.123:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    
  2. Copy the entire docker swarm join command that is displayed as part of the output from the command.
  3. Paste the copied command into the terminal of node2.
    node2$ docker swarm join \
    >     --token SWMTKN-1-3n2iuzpj8jynx0zd8axr0ouoagvy0o75uk5aqjrn0297j4uaz7-63eslya31oza2ob78b88zg5xe \
    >     172.31.34.123:2377
    
    This node joined a swarm as a worker.
    
  4. Run a docker node ls on node1 to verify that both nodes are part of the Swarm.
    node1$ docker node ls
    ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
    4nb02fhvhy8sb0ygcvwya9skr    ip-172-31-43-74   Ready   Active
    cw6jpk7pqfg0jkilff5hr8z42 *  ip-172-31-34-123  Ready   Active        Leader
    
    The ID and HOSTNAME values may be different in your lab. The important thing to check is that both nodes have joined the Swarm and are ready and active.

Step 2: Create an overlay network

Now that you have a Swarm initialized it's time to create an overlay network.
  1. Create a new overlay network called "overnet" by executing the following command on node1.
    node1$ docker network create -d overlay overnet
    0cihm9yiolp0s9kcczchqorhb
    
  2. Use the docker network ls command to verify the network was created successfully.
    node1$ docker network ls
    NETWORK ID          NAME                DRIVER      SCOPE
    1befe23acd58        bridge              bridge      local
    0ea6066635df        docker_gwbridge     bridge      local
    726ead8f4e6b        host                host        local
    8eqnahrmp9lv        ingress             overlay     swarm
    ef4896538cc7        none                null        local
    0cihm9yiolp0        overnet             overlay     swarm
    
    The new "overnet" network is shown on the last line of the output above. Notice how it is associated with the overlay driver and is scoped to the entire Swarm.
    NOTE: The other new networks (ingress and docker_gwbridge) were created automatically when the Swarm cluster was created.
  3. Run the same docker network ls command from node2
    node2$ docker network ls
    NETWORK ID          NAME                DRIVER      SCOPE
    b76635120433        bridge              bridge      local
    ea13f975a254        docker_gwbridge     bridge      local
    73edc8c0cc70        host                host        local
    8eqnahrmp9lv        ingress             overlay     swarm
    c4fb141606ca        none                null        local
    
    Notice that the "overnet" network does not appear in the list. This is because Docker only extends overlay networks to hosts when they are needed. This is usually when a host runs a task from a service that is created on the network. We will see this shortly.
  4. Use the docker network inspect command to view more detailed information about the "overnet" network. You will need to run this command from node1.
    node1$ docker network inspect overnet
    [
        {
            "Name": "overnet",
            "Id": "0cihm9yiolp0s9kcczchqorhb",
            "Scope": "swarm",
            "Driver": "overlay",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": []
            },
            "Internal": false,
            "Containers": null,
            "Options": {
                "com.docker.network.driver.overlay.vxlanid_list": "257"
            },
            "Labels": null
        }
    ]
    

Step 3: Create a service

Now that you have a Swarm initialized and an overlay network, it's time to create a service that uses the network.
  1. Execute the following command from node1 to create a new service called myservice on the overnet network with two tasks/replicas.
    node1$ docker service create --name myservice \
    --network overnet \
    --replicas 2 \
    ubuntu sleep infinity
    
    e9xu03wsxhub3bij2tqyjey5t
    
  2. Verify that the service is created and both replicas are up.
    node1$ docker service ls
    ID            NAME       REPLICAS  IMAGE   COMMAND
    e9xu03wsxhub  myservice  2/2       ubuntu  sleep infinity
    
    The 2/2 in the REPLICAS column shows that both tasks in the service are up and running.
  3. Verify that a single task (replica) is running on each of the two nodes in the Swarm.
    node1$ docker service ps myservice
    ID            NAME         IMAGE   NODE   DESIRED STATE  CURRENT STATE  ERROR
    5t4wh...fsvz  myservice.1  ubuntu  node1  Running        Running 2 mins
    8d9b4...te27  myservice.2  ubuntu  node2  Running        Running 2 mins
    
    The ID and NODE values might be different in your output. The important thing to note is that each task/replica is running on a different node.
  4. Now that node2 is running a task on the "overnet" network it will be able to see the "overnet" network. Run the following command from node2 to verify this.
    node2$ docker network ls
    NETWORK ID          NAME                DRIVER      SCOPE
    b76635120433        bridge              bridge      local
    ea13f975a254        docker_gwbridge     bridge      local
    73edc8c0cc70        host                host        local
    8eqnahrmp9lv        ingress             overlay     swarm
    c4fb141606ca        none                null        local
    0cihm9yiolp0        overnet             overlay     swarm
    
  5. Run the following command on node2 to get more detailed information about the "overnet" network and obtain the IP address of the task running on node2.
    node2$ docker network inspect overnet
    [
        {
            "Name": "overnet",
            "Id": "0cihm9yiolp0s9kcczchqorhb",
            "Scope": "swarm",
            "Driver": "overlay",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "10.0.0.0/24",
                        "Gateway": "10.0.0.1"
                    }
                    ]
            },
            "Internal": false,
            "Containers": {
                "286d2e98c764...37f5870c868": {
                    "Name": "myservice.1.5t4wh7ngrzt9va3zlqxbmfsvz",
                    "EndpointID": "43590b5453a...4d641c0c913841d657",
                    "MacAddress": "02:42:0a:00:00:04",
                    "IPv4Address": "10.0.0.4/24",
                    "IPv6Address": ""
                }
            },      
            "Options": {
                "com.docker.network.driver.overlay.vxlanid_list": "257"
                },
                "Labels": {}
                }
            ]
    
You should note that as of Docker 1.12, docker network inspect only shows containers/tasks running on the local node. This means that 10.0.0.4 is the IPv4 address of the container running on node2. Make a note of this IP address for the next step (the IP address in your lab might be different than the one shown here in the lab guide).

Step 4: Test the network

To complete this step you will need the IP address of the service task running on node2 that you saw in the previous step.
  1. Execute the following commands from node1.
    node1$ docker network inspect overnet
    [
        {
            "Name": "overnet",
            "Id": "0cihm9yiolp0s9kcczchqorhb",
            "Scope": "swarm",
            "Driver": "overlay",
            "Containers": {
                "053abaa...e874f82d346c23a7a": {
                    "Name": "myservice.2.8d9b4i6vnm4hf6gdhxt40te27",
                    "EndpointID": "25d4d5...faf6abd60dba7ff9b5fff6",
                    "MacAddress": "02:42:0a:00:00:03",
                    "IPv4Address": "10.0.0.3/24",
                    "IPv6Address": ""
                }
            },      
            "Options": {
                "com.docker.network.driver.overlay.vxlanid_list": "257"
            },
            "Labels": {}
        }
    ]
    
    Notice that the IP address listed for the service task (container) running on node1 is different from the IP address of the service task running on node2. Note also that they are both on the same "overnet" network.
  2. Run a docker ps command to get the ID of the service task on node1 so that you can log in to it in the next step.
    node1$ docker ps
    CONTAINER ID   IMAGE           COMMAND            CREATED      STATUS         NAMES
    053abaac4f93   ubuntu:latest   "sleep infinity"   19 mins ago  Up 19 mins     myservice.2.8d9b4i6vnm4hf6gdhxt40te27
    <Snip>
    
  3. Log on to the service task. Be sure to use the container ID from your environment as it will be different from the example shown below.
    node1$ docker exec -it 053abaac4f93 /bin/bash
    root@053abaac4f93:/#
    
  4. Install the ping command and ping the service task running on node2.
    root@053abaac4f93:/# apt-get update && apt-get install iputils-ping
    <Snip>
    root@053abaac4f93:/#
    root@053abaac4f93:/#
    root@053abaac4f93:/# ping 10.0.0.4
    PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
    64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.726 ms
    64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.647 ms
    ^C
    --- 10.0.0.4 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 999ms
    rtt min/avg/max/mdev = 0.647/0.686/0.726/0.047 ms
    
    The output above shows that both tasks from the myservice service are on the same overlay network spanning both nodes and that they can use this network to communicate.

Step 5: Test service discovery

Now that you have a working service using an overlay network, let's test service discovery.
If you are not still inside of the container on node1, log back into it with the docker exec command.
  1. Run the following command from inside of the container on node1.
    root@053abaac4f93:/# cat /etc/resolv.conf
    search eu-west-1.compute.internal
    nameserver 127.0.0.11
    options ndots:0
    
    The value that we are interested in is the nameserver 127.0.0.11. This value sends all DNS queries from the container to an embedded DNS resolver running inside the container, listening on 127.0.0.11:53. All Docker containers run an embedded DNS server at this address.
    NOTE: Some of the other values in your file may be different to those shown in this guide.
  2. Try and ping the myservice name from within the container.
    root@053abaac4f93:/# ping myservice
    PING myservice (10.0.0.2) 56(84) bytes of data.
    64 bytes from ip-10-0-0-2.eu-west-1.compute.internal (10.0.0.2): icmp_seq=1 ttl=64 time=0.020 ms
    64 bytes from ip-10-0-0-2.eu-west-1.compute.internal (10.0.0.2): icmp_seq=2 ttl=64 time=0.041 ms
    64 bytes from ip-10-0-0-2.eu-west-1.compute.internal (10.0.0.2): icmp_seq=3 ttl=64 time=0.039 ms
    ^C
    --- myservice ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2001ms
    rtt min/avg/max/mdev = 0.020/0.033/0.041/0.010 ms
    
    The output clearly shows that the container can ping the myservice service by name. Notice that the IP address returned is 10.0.0.2. In the next few steps we'll verify that this address is the virtual IP (VIP) assigned to the myservice service.
  3. Type the exit command to leave the exec container session and return to the shell prompt of your node1 Docker host.
  4. Inspect the configuration of the myservice service and verify that the VIP value matches the value returned by the previous ping myservice command.
    node1$ docker service inspect myservice
    [
        {
            "ID": "e9xu03wsxhub3bij2tqyjey5t",
            "Version": {
                "Index": 20
            },
            "CreatedAt": "2016-11-23T09:28:57.888561605Z",
            "UpdatedAt": "2016-11-23T09:28:57.890326642Z",
            "Spec": {
                "Name": "myservice",
                "TaskTemplate": {
                    "ContainerSpec": {
                        "Image": "ubuntu",
                        "Args": [
                            "sleep",
                            "infinity"
                        ]
                    },
    <Snip>
            "Endpoint": {
                "Spec": {
                    "Mode": "vip"
                },
                "VirtualIPs": [
                    {
                        "NetworkID": "0cihm9yiolp0s9kcczchqorhb",
                        "Addr": "10.0.0.2/24"
                    }
    <Snip>
    
    Towards the bottom of the output you will see the VIP of the service listed. The VIP in the output above is 10.0.0.2 but the value may be different in your setup. The important point to note is that the VIP listed here matches the value returned by the ping myservice command.
Feel free to create a new docker exec session to the service task (container) running on node2 and perform the same ping myservice command. You will get a response from the same VIP.

Docker


Namespaces
Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
Docker Engine uses namespaces such as the following on Linux:
  • The pid namespace: Process isolation (PID: Process ID).
  • The net namespace: Managing network interfaces (NET: Networking).
  • The ipc namespace: Managing access to IPC resources (IPC: InterProcess Communication).
  • The mnt namespace: Managing filesystem mount points (MNT: Mount).
  • The uts namespace: Isolating kernel and version identifiers. (UTS: Unix Timesharing System).
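A quick way to see the pid namespace at work (a sketch; the image is illustrative):

    docker run --rm ubuntu sh -c 'echo $$'
    # 1

Inside its own pid namespace the container's first process sees itself as PID 1, even though on the host it is just an ordinary process with a normal PID.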

Control groups
Docker Engine on Linux also relies on another technology called control groups(cgroups). A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints. For example, you can limit the memory available to a specific container.
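For example (a sketch; the limits, name, and image are illustrative):

    docker run -d --memory=512m --cpus=1 --name limited ubuntu sleep infinity
    docker stats --no-stream limited    # the MEM USAGE / LIMIT column reflects the 512 MiB cap
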
Union file systems
Union file systems, or UnionFS, are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers. Docker Engine can use multiple UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper.
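The stacked layers of an image are visible with docker history (the output below is illustrative and truncated):

    docker history ubuntu
    # IMAGE          CREATED BY                                      SIZE
    # 775349758637   /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
    # <missing>      /bin/sh -c #(nop) ADD file:... in /             64MB
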
Container format
Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer. In the future, Docker may support other container formats by integrating with technologies such as BSD Jails or Solaris Zones.

Docker Engine

Docker Engine has three components:
The Docker daemon runs in the background and manages Docker objects such as containers, images, networks, and storage.
The REST API is the interface through which instructions are given to the daemon.
The Docker CLI can be anywhere on the network; it talks to the daemon via the REST API.
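Because the CLI talks to the daemon only through the REST API, it can just as easily point at a remote engine (a sketch; the host name and port are illustrative and assume the remote daemon is exposed there):

    docker -H tcp://remote-host:2375 ps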

Docker uses namespaces to provide isolated workspaces between containers. Taking the pid namespace as an example:

When a Linux system boots up, the main process is PID 1, which spawns multiple sub-processes.
Once the system has booted, we can see the many processes running on the machine using the ps -ef command.
Process IDs are unique; two processes cannot have the same process ID.
The same approach is followed inside a container (the child system). To the Linux Docker host machine, however, the container is just another process, linked into the host's own process tree.
Hence each process inside a container corresponds to a process on the underlying host machine, seen with a different PID inside the container's namespace.


Cgroups manage resources for containers. By default a container's CPU and memory are not limited, but we can restrict the CPU and memory available to a container by setting limits.


Container Orchestration