2021/12/13

Learning Flask 1: Bootstrap with flask

Background

This is a follow-up to my previous NETCONF blog post. The diagram below recalls the application I am going to build.

In the last blog post, I built a server running netconfd effectively. In this blog post, I will focus on building the CoreServer (frontend layer) with Flask.
By the way, the full source code of this application is here

Why flask?

I do not have much experience with Flask; actually, I am more familiar with Django. Django, however, does not fit this application since it is overkill for my requirements. I need something small and easy to code in the first place.
After some research on this website, I picked Flask from the list. By the way, Django is classified as a full-stack framework while Flask is simply a micro framework on that site.

Start the application with a boilerplate

Nowadays, as long as you are using well-known frameworks and libraries, there are usually multiple open-source boilerplates available, which saves a lot of effort on laying the groundwork.
Personally, I picked this boilerplate for my application. It provides a docker image, a docker-compose file, and a folder structure.

The usage of docker image + docker-compose

They are essential for building the application in a docker manner, and they save me time on setting up the CI/CD pipeline.

The folder structure

It saves me time figuring out how to organise files in the Flask manner.
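For readers who have never touched Flask, here is a minimal application for orientation. This is purely illustrative; the route and names below are my own, not the boilerplate's actual code.

```python
# A minimal Flask application, independent of any boilerplate.
# The /health route is an illustrative choice, not part of the boilerplate.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Trivial endpoint to verify the app is wired up.
    return jsonify(status="ok")

if __name__ == "__main__":
    # The boilerplate serves the app behind uwsgi-nginx instead of
    # Flask's development server; app.run() is only for local testing.
    app.run(host="0.0.0.0", port=5000)
```

The boilerplate mainly decides where files like this live and how they are packaged into the docker image.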

Setup CI / CD pipeline

I strongly recommend readers set up a CI/CD pipeline as soon as possible, especially on a new project. With a CI/CD pipeline in place, code quality is guaranteed to some extent: coding style, some basic tests, etc.

  • Below is the yml file for running a CI/CD pipeline in GitLab CI.
# You can override the included template(s) by including variable overrides
# SAST customization: https://docs.gitlab.com/ee/user/application_security/sast/#customizing-the-sast-settings
# Secret Detection customization: https://docs.gitlab.com/ee/user/application_security/secret_detection/#customizing-settings
# Dependency Scanning customization: https://docs.gitlab.com/ee/user/application_security/dependency_scanning/#customizing-the-dependency-scanning-settings
# Note that environment variables can be set in several places
# See https://docs.gitlab.com/ee/ci/variables/#cicd-variable-precedence
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - build
  - test

variables:
  PIPELINE_REGISTRY_DOCKER_IMAGE: "${CI_REGISTRY_IMAGE}:pipeline"
  PIPELINE_RUNTIME_DOCKER_IMAGE: "hands-on-core-server:latest"
  DOCKER_COMPOSE_CI_YML_FILE_NAME: "docker-compose-ci.yml"

buildDockerImage:
  image: docker
  services:
      - docker:dind
  stage: build
  variables:
    DOCKER_BUILDKIT: 0
  script:
      - apk add --no-cache docker-compose
      - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
      - docker-compose -f "docker/${DOCKER_COMPOSE_CI_YML_FILE_NAME}" build CoreServer
      - docker tag "${PIPELINE_RUNTIME_DOCKER_IMAGE}" "${PIPELINE_REGISTRY_DOCKER_IMAGE}"
      - docker push "${PIPELINE_REGISTRY_DOCKER_IMAGE}"

sast:
  stage: test

runTests:
  image: ${PIPELINE_REGISTRY_DOCKER_IMAGE}
  stage: test
  script:
      - flake8
  • The sast job is provided by GitLab. Just ignore it
  • The pipeline’s contents are trivial. Literally,
    • I build a docker image holding all my boilerplate and dependencies.
    • I set up a runTests stage which runs in the docker image above, and runs lint checks with flake8 only at this moment
  • Below is the Dockerfile for CI
FROM tiangolo/uwsgi-nginx-flask:python3.8

WORKDIR /workspace
COPY . /workspace

RUN \
    pip install -r requirements-test.txt && \
    pip install -r requirements.txt

Conclusions

Now I have a good foundation to build up my application in Flask. There is a CI/CD pipeline to make sure my code stays in good quality in some sense, and there is a Flask-style folder structure for me to organise my code.
The next step of the development will be writing some trivial tests and making sure the pipeline runs them on each commit. They will be covered in the next blog post.

Useful links

Quick try on NETCONF with docker and docker-compose

Background

Recently, I had a task of building an application that interacts over NETCONF. I had no idea what NETCONF was before this task, but I finally completed it, partially. This blog post summarises what I have learnt about NETCONF, and hopefully baits some further reading from you guys :D

The diagram illustrates 2 critical components in the application: CoreServer and TargetHost. To summarise the application literally, I need to build a CoreServer such that users can monitor a device’s network status over NETCONF in a RESTful manner.
In this blog post, I will focus on the TargetHost, the NETCONF part, only. Hopefully, I will write another blog post on the CoreServer.

What is a NETCONF actually?

As the blog’s title says, this is a quick try. I did not spend much time studying what NETCONF actually is; below is just my understanding from some quick research. Forgive me if I am wrong, since this part is not the critical part of my application. Most of the useful links are captured in the last section. I recommend you go through them if you really want to figure out NETCONF inside out.
AFAIK, NETCONF is yet another network management protocol, similar to SNMP. It aims at configuring, and getting configurations from, supported devices in a cross-platform manner.
Back to my application: since NETCONF is a protocol, I need a server that understands it and does the jobs accordingly. Secondly, I need a client that sends commands in the NETCONF manner in order to instruct the server to do what I want.

Why NETCONF with docker?

According to my Google searches, NETCONF is a common and old protocol, which means the quickest way to get an effective NETCONF server is a commercial router. Since I don’t have the luxury of owning one, I need to build one myself. The problem is I don’t have any spare devices on hand, so I try to run that server in a docker container.
Following is the docker image I used

  • https://hub.docker.com/r/yuma123/netconfd

Configuring netconfd in the docker image

My netconfd should be able to manipulate network interfaces, so I configure it, expressed in docker-compose.yml, as follows

    version: "3"
    services:
      TargetClient:
        networks:
          - hands-on-interview
        extra_hosts:
          - "host.docker.internal:host-gateway"
        image: yuma123/netconfd:2.12
        environment:
          NETCONF_USER: "handsonadmin"
          NETCONF_PASSWORD: "handsonpassword"
          NETCONF_PORT: "830"
          NETCONF_LOG_LEVEL: "info"
          NETCONF_MODULES: "ietf-interfaces"
        ports:
          - "127.0.0.1:56734:830"
    networks:
      hands-on-interview:

This yml is adapted from the application I worked on. I won’t go through it line by line, but 2 parts are important: environment, and ports.
The environment variables are the expected way to configure the daemon, while the port configuration allows us to debug the daemon with another command line tool, yangcli, as discussed later on.

List interfaces with yangcli

After the netconfd docker container is up, we can list the network interfaces with yangcli. I tried both ways: interactive, and a one-off command.
The tricky part is using the --net=host docker feature, and that’s why we need to set up the port mapping for the netconfd docker container. Otherwise, we cannot connect to that server easily.

Interactive way

```
# Run the yangcli under host network
docker run -it --rm --net=host yuma123/yangcli:2.12
yangcli>

# Connect to the docker container on localhost
yangcli> connect ncport=56734 server=127.0.0.1 user=handsonadmin password=handsonpassword

# Test with show vars
yangcli> show vars

# Show interfaces-state
yangcli> sget /interfaces-state
```

One-off bash command

```
docker run -it --rm --net=host yuma123/yangcli:2.12 \
  --ncport=56734 \
  --server=127.0.0.1 \
  --user=handsonadmin \
  --password=handsonpassword \
  --run-command="sget /interfaces-state" \
  --batch-mode \
  --timeout=5 \
  --display-mode=xml \
  --log-level=info
```

  • Expected results are listed below

```xml
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <data>
    <interfaces-state xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
      <interface>
        <name>lo</name>
        <oper-status>unknown</oper-status>
        <statistics>
          <in-octets>0</in-octets>
          <in-unicast-pkts>0</in-unicast-pkts>
          <in-errors>0</in-errors>
          <in-discards>0</in-discards>
          <in-multicast-pkts>0</in-multicast-pkts>
          <out-octets>0</out-octets>
          <out-unicast-pkts>0</out-unicast-pkts>
          <out-errors>0</out-errors>
          <out-discards>0</out-discards>
        </statistics>
      </interface>
      <interface>
        <name>tunl0</name>
        <oper-status>down</oper-status>
        <statistics>
          <in-octets>0</in-octets>
          <in-unicast-pkts>0</in-unicast-pkts>
          <in-errors>0</in-errors>
          <in-discards>0</in-discards>
          <in-multicast-pkts>0</in-multicast-pkts>
          <out-octets>0</out-octets>
          <out-unicast-pkts>0</out-unicast-pkts>
          <out-errors>0</out-errors>
          <out-discards>0</out-discards>
        </statistics>
      </interface>
      <interface>
        <name>ip6tnl0</name>
        <oper-status>down</oper-status>
        <statistics>
          <in-octets>0</in-octets>
          <in-unicast-pkts>0</in-unicast-pkts>
          <in-errors>0</in-errors>
          <in-discards>0</in-discards>
          <in-multicast-pkts>0</in-multicast-pkts>
          <out-octets>0</out-octets>
          <out-unicast-pkts>0</out-unicast-pkts>
          <out-errors>0</out-errors>
          <out-discards>0</out-discards>
        </statistics>
      </interface>
      <interface>
        <name>eth0</name>
        <oper-status>up</oper-status>
        <statistics>
          <in-octets>111371</in-octets>
          <in-unicast-pkts>738</in-unicast-pkts>
          <in-errors>0</in-errors>
          <in-discards>0</in-discards>
          <in-multicast-pkts>0</in-multicast-pkts>
          <out-octets>481471</out-octets>
          <out-unicast-pkts>664</out-unicast-pkts>
          <out-errors>0</out-errors>
          <out-discards>0</out-discards>
        </statistics>
      </interface>
    </interfaces-state>
  </data>
</rpc-reply>
```

Conclusions

After making use of the docker images for netconfd and yangcli, we now have a server running NETCONF effectively. The rest of my job is utilising ncclient, a Python library, to talk with the server instead of using yangcli, since that CLI does not provide programmer-friendly output.
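To give a flavour of the ncclient direction, below is a minimal sketch. The `manager.connect` parameters mirror the docker-compose file above, but the helper function and sample reply are my own illustration, not part of my application's code; the parsing works on any `<interfaces-state>` reply like the one listed earlier.

```python
# Sketch: with ncclient, the XML handled below would come from roughly:
#   from ncclient import manager  # pip install ncclient
#   with manager.connect(host="127.0.0.1", port=56734,
#                        username="handsonadmin", password="handsonpassword",
#                        hostkey_verify=False) as m:
#       xml_text = m.get(filter=("xpath", "/interfaces-state")).data_xml
# Here we only show the programmer-friendly part: parsing that reply.
import xml.etree.ElementTree as ET

NS = {"if": "urn:ietf:params:xml:ns:yang:ietf-interfaces"}

def parse_oper_status(xml_text: str) -> dict:
    """Map interface name -> oper-status from an <rpc-reply> payload."""
    root = ET.fromstring(xml_text)
    result = {}
    for interface in root.iter(
        "{urn:ietf:params:xml:ns:yang:ietf-interfaces}interface"
    ):
        name = interface.find("if:name", NS).text
        status = interface.find("if:oper-status", NS).text
        result[name] = status
    return result

# A trimmed-down version of the expected reply shown above.
sample_reply = """\
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <data>
    <interfaces-state xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
      <interface><name>lo</name><oper-status>unknown</oper-status></interface>
      <interface><name>eth0</name><oper-status>up</oper-status></interface>
    </interfaces-state>
  </data>
</rpc-reply>"""

print(parse_oper_status(sample_reply))  # {'lo': 'unknown', 'eth0': 'up'}
```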
Thanks for reading up to here. See you in the next blog post. Before we end, I have captured below the useful links from this study.

Useful links on working with NETCONF

How to use NETCONF with ncclient

Quick guide on understanding YANG + NETCONF with tool yuma

Links to the docker images related to NETCONF

2021/03/17

Walkthrough of the basics of Python Channels

Goal of writing this blog

  • Figure out the terminologies in the Channels project
    • For example, Consumers, Channel layer
  • As an ASGI application, how Channels bridges up with existing Django code
  • Express the code paths for traffic in and out between Django and Daphne
  • What the interactions are between the Channel layer and Consumers

Terminologies

Consumers[1][2]

  • basic consumers - individual pieces that might handle chat messaging, or notifications - and tie them together with URL routing, protocol detection and other handy things to make a full application.
  • A consumer is the basic unit of Channels code. We call it a consumer as it consumes events, but you can think of it as its own tiny little application.

  • Consumers can be either long-running or short-running, depending on the scope that the consumer handles

  • a rich abstraction that allows you to make ASGI applications easily.
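The "consumer = tiny application that consumes events" idea can be illustrated with a toy, stdlib-only class. This is not the real channels API; it only mimics how Channels dispatches an event of type "chat.message" to a handler method named chat_message.

```python
# Toy illustration of a consumer: one instance per connection scope,
# dispatching events by type. Not the actual channels API.
import asyncio

class TinyConsumer:
    """Consumes events for one scope, like a Channels consumer would."""

    def __init__(self, scope):
        self.scope = scope
        self.handled = []

    async def dispatch(self, event):
        # Channels-style routing: "chat.message" -> self.chat_message(...)
        handler = getattr(self, event["type"].replace(".", "_"), None)
        if handler is not None:
            await handler(event)

    async def chat_message(self, event):
        self.handled.append(event["text"])

async def demo():
    consumer = TinyConsumer(scope={"type": "websocket"})
    await consumer.dispatch({"type": "chat.message", "text": "hello"})
    return consumer.handled

messages = asyncio.run(demo())
print(messages)  # ['hello']
```

The real generic consumers (e.g. WebsocketConsumer) add connection lifecycle handlers and channel-layer integration on top of this dispatch pattern.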

(Channels) Router[1][2]

  • A way to combine multiple consumers into 1 ASGI application
  • The Channels router works on the scope level instead of the event level
    • It distributes per scope instead of per event
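Scope-level routing can be sketched in a few lines of stdlib-only code. This is loosely modelled on channels.routing.ProtocolTypeRouter, but all names below are made up for illustration.

```python
# Toy scope-level router: the inner application is chosen once per scope,
# by scope["type"]; individual events are never inspected.
import asyncio

def protocol_type_router(application_mapping):
    async def router(scope, receive, send):
        app = application_mapping[scope["type"]]
        await app(scope, receive, send)
    return router

async def http_app(scope, receive, send):
    await send({"type": "http.response", "handled_by": "http_app"})

async def websocket_app(scope, receive, send):
    await send({"type": "websocket.accept", "handled_by": "websocket_app"})

async def demo():
    sent = []

    async def send(event):
        sent.append(event)

    async def receive():
        return {}

    router = protocol_type_router(
        {"http": http_app, "websocket": websocket_app}
    )
    await router({"type": "websocket"}, receive, send)
    return sent

events = asyncio.run(demo())
print(events[0]["handled_by"])  # websocket_app
```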

Channel Layer[1][2]

  • A channel layer is a low-level abstraction around a set of transports that allow you to send information between different processes
  • A solution to communicate between different application instances

  • Each application instance should have a unique channel name

  • Allows both point-to-point and broadcast messaging

Worker

  • A solution from the Channels project, running as a standalone process, for processing some basic background tasks. A worker listens for, and fires, events from databases (Redis, Postgres, etc.) through the Channel Layer
  • IMO, we can view this as a kind of celery worker

Channels and Django

From Daphne to Django

  • Invoked through … django asgi application definition …
  • channels.routing.ProtocolTypeRouter
  • channels.http.AsgiHandler
  • … Corresponding django view …

From Django to Daphne

  • … Corresponding Django view …
  • ….
  • channels.http.AsgiHandler.get_response
  • … Call send(), which comes from Daphne …

Ecosystem of Channels

2021/03/15

Walkthrough of the basics of Django + ASGI + Daphne

Goal of writing this blog

  • Figure out the terminologies in ASGI
    • Say, how to map Django & Daphne onto ASGI terminologies
  • Figure out how Daphne interacts with Django by using ASGI
    • Say, the code path for Daphne to forward an HTTP request to the Django application, and the code path for the reply
  • Figure out how Websocket is implemented in Daphne
    • Understand how Daphne processes websocket messages from the browser
    • Understand how Daphne processes websocket messages from Django or other applications

Terminologies in ASGI

ASGI

ASGI (Asynchronous Server Gateway Interface) is a spiritual successor to WSGI, intended to provide a standard interface between async-capable Python web servers, frameworks, and applications.

WSGI

WSGI provided a standard for synchronous Python apps

WSGI applications

WSGI applications are a single, synchronous callable that takes a request and returns a response; this doesn’t allow for long-lived connections, like you get with long-poll HTTP or WebSocket connections.
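For contrast with the ASGI examples that follow, here is a minimal WSGI callable, driven by hand with a stub start_response. The names and payload are illustrative, not from any particular framework.

```python
# A minimal WSGI application: one synchronous callable,
# one request in, one response out.
def wsgi_app(environ, start_response):
    # environ describes the request; start_response sets status and headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello, wsgi"]

# Drive it by hand with a stub start_response (no real server needed).
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = wsgi_app({"REQUEST_METHOD": "GET", "PATH_INFO": "/"}, start_response)
print(captured["status"], body)  # 200 OK [b'hello, wsgi']
```

Note there is exactly one call and one return value per request, which is why long-lived connections do not fit this model.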

ASGI applications[1][2]

  • An ASGI application is structured as a single, asynchronous callable with 3 arguments: scope, send, and receive
    Below is an example of a simple asgi application.

    from typing import Awaitable, Callable

    async def example_asgi_application(
        scope: dict,
        receive: Callable[[], Awaitable[dict]],
        send: Callable[[dict], Awaitable[None]],
    ):
        asgi_event = await receive()
        ...
        await send(another_asgi_event)
  • An application, which lives inside a protocol server, is called once per connection, and handles event messages as they happen, emitting its own event messages back when necessary.
  • Each call of the application callable maps to a single incoming “socket” or connection, and is expected to last the lifetime of that connection plus a little longer if there is cleanup to do.
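The callable's shape can be exercised without any server: a runnable sketch with stub receive()/send() coroutines, using only the standard library (the event payloads are illustrative).

```python
# Drive an ASGI-style application by hand: the "protocol server" here is
# just a pair of stub coroutines feeding and collecting events.
import asyncio

async def example_asgi_application(scope, receive, send):
    # Consume one event and emit one event back, echoing its type.
    event = await receive()
    await send({"type": "echo", "original_type": event["type"]})

async def main():
    sent = []

    async def receive():
        # Pretend the protocol server delivered an http.request event.
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(event):
        sent.append(event)

    scope = {"type": "http", "asgi": {"version": "3.0"}}
    await example_asgi_application(scope, receive, send)
    return sent

events = asyncio.run(main())
print(events)  # [{'type': 'echo', 'original_type': 'http.request'}]
```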

(ASGI) Events[1][2]

  • A simple Python dict. You can put anything in this dict, but it must contain a key type. This data structure is used by send(), and receive(), in ASGI applications

    {'type': "some type", .....}

  • Events are messages sent to the application as things happen on the connection, and messages sent back by the application to be received by the server, including data to be transmitted to the client.

(ASGI) Connection scope

  • A connection scope, which represents a protocol (asgi) connection to a user and survives until the connection closes.
  • Every connection by a user to an ASGI application results in a call of the application callable to handle that connection entirely. How long this lives, and the information that describes each specific connection, is called the connection scope.
  • scope must be a dict. The key scope["type"] will always be present, and can be used to work out which protocol is incoming. The key scope["asgi"] will also be present as a dictionary containing a scope["asgi"]["version"] key that corresponds to the ASGI version the server implements.
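A concrete example of a connection scope may help; the keys follow the spec text quoted above, while the values describe one hypothetical HTTP request.

```python
# An illustrative HTTP connection scope (values are made up).
scope = {
    "type": "http",                        # which protocol is incoming
    "asgi": {"version": "3.0"},            # ASGI version the server implements
    "http_version": "1.1",
    "method": "GET",
    "path": "/health",
    "headers": [(b"host", b"localhost")],  # header name/value byte pairs
}

# The two keys guaranteed to be present by the spec:
print(scope["type"], scope["asgi"]["version"])  # http 3.0
```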

(ASGI) protocol server

A protocol server, which terminates sockets and translates them into connections and per-connection event messages.

ASGI Connections

The definition of a connection and its lifespan are dictated by the protocol specification in question. For example, with HTTP it is one request, whereas for a WebSocket it is a single WebSocket connection.

Django / Daphne in ASGI terminologies

  • Django will be a kind of ASGI application
  • Daphne will be the protocol server

Code path between Django, and Daphne

Daphne main loop

  • daphne.server.Server.run

From Daphne to Django

  • twisted.internet.endpoints.TCP4ServerEndpoint.listen
  • daphne.http_protocol.HTTPFactory.doStart
  • daphne.http_protocol.HTTPFactory.startFactory
  • twisted.internet.tcp.Port.doRead
  • daphne.http_protocol.HTTPFactory.buildProtocol
  • twisted.web.http.HTTPChannel.connectionMade
  • twisted.web.http.HTTPChannel.lineReceived
  • daphne.http_protocol.WebRequest
  • daphne.http_protocol.WebRequest.requestReceived
  • daphne.http_protocol.WebRequest.process
  • daphne.server.Server.create_application
  • … django application …

From Django to Daphne

  • send() from django asgi application
  • daphne.server.Server.handle_reply
  • ….

How Websocket works

From Browser to Daphne

  • daphne.http_protocol.WebRequest.process
  • daphne.ws_protocol.WebSocketFactory.buildProtocol
  • daphne.ws_protocol.WebSocketProtocol
  • daphne.ws_protocol.WebSocketProtocol.applicationCreateWorked
  • … handled in asgi applications …

From Other applications to Browser

  • … WS connection has been setup in above section …
  • … inputQueue from daphne.server.Server.create_application …
  • daphne.ws_protocol.WebSocketProtocol.handle_reply
  • … back to the server & back to the browser …