Category Archives: WebService

Tracing with LightStep and Nginx

Tracing your webservice/microservice helps you identify bottlenecks by showing which component takes the most time to respond. It also helps pinpoint the location/log/function where an error occurs and what causes poor performance.

In this blog post, we cover how to leverage OpenTracing with Nginx, using the LightStep vendor to trace our web app.

OpenTracing provides vendor-neutral APIs and instrumentation for distributed tracing.

LightStep is one such vendor: it provides a connector for configuring distributed tracing based on the OpenTracing standard, along with features such as dynamic service maps and root cause correlation across traces, metrics, and logs.

We need to configure Nginx to use the nginx-opentracing module and provide a vendor tracer; we are going to use the LightStep tracer.

This GitHub Gist should help you get started easily.

Install.txt describes the steps to follow for downloading and configuring the required libraries.

Note these two lines:

load_module modules/ngx_http_opentracing_module.so;
This instructs Nginx to load the OpenTracing dynamic module.

opentracing_load_tracer /usr/local/lib/liblightstep_tracer_plugin.so /etc/nginx/lightstep-config.json;
This loads the LightStep tracer plugin from the given location and points it at the vendor's JSON configuration file.
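The JSON file itself is small. A minimal sketch is below; the access token placeholder and component name are assumptions, so check the LightStep tracer documentation for the full schema supported by your plugin version:

```json
{
  "component_name": "nginx",
  "access_token": "<your-lightstep-access-token>"
}
```

The `component_name` is how the service appears in the LightStep UI, and the access token ties the traces to your LightStep project.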

Note: in Step 3, when you run
strings /lib64/libstdc++.so.6 | grep GLIBCXX
make sure the output includes
GLIBCXX_3.4.25
GLIBCXX_3.4.26

Once you have configured all of the above, restart Nginx and you should see traces for all the requests your web app serves.
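For finer control, the nginx-opentracing module also exposes per-location directives. A sketch of how they fit into an Nginx config is shown below; the upstream name is made up, and the exact directive set can vary by module version, so treat this as a starting point rather than a complete config:

```nginx
# nginx.conf (fragment) -- assumes the module and tracer are loaded
# via load_module and opentracing_load_tracer as shown above
http {
    opentracing on;  # enable tracing for requests handled in this context

    server {
        listen 80;

        location /api {
            opentracing_operation_name $uri;    # name each span after the request URI
            opentracing_propagate_context;      # inject trace headers into upstream requests
            proxy_pass http://backend;          # hypothetical upstream
        }
    }
}
```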

The traces for each request then show up in the LightStep UI.

Deploying Python Rest Service with uWSGI

This article describes how we can leverage uWSGI, one of the first implementations of WSGI (the Web Server Gateway Interface, which forwards requests to web applications), for a Python Flask application. I assume the reader has a fair understanding of Flask, as the article won't detail how to write Flask applications.

REST applications in Python are easy to write, but the problem arises when we have to choose a production-grade application server to host our webservice. Flask, one of the most popular frameworks for writing Python web applications, ships only a development server that is not production ready. Moreover, the limitations of Python threading are well known, so we should avoid relying on it. I recently faced this situation when we had to deploy a Python webservice handling millions of records per second from users spread across the globe. After doing some research, we chose uWSGI as our application server in conjunction with Nginx.

uWSGI supports launching multiple processes of your application; you just specify the number of workers you want. uWSGI can listen on an HTTP port, but since we are using Nginx as our web server, we will have uWSGI listen on a Unix socket. Client requests arrive at Nginx, which routes them to the configured uWSGI socket endpoint.

So, what are all the configurations that we need to set to get uWSGI working?

  • The location of our Python REST service, obviously
  • Which module to call as the entry point
  • The location of the socket where uWSGI will listen for requests routed to our REST service
  • The number of processes we want uWSGI to launch
  • The user ID that uWSGI will run as
  • Log file paths for the application logs
  • The plugins uWSGI should load to run our Python application

Sample Config

[uwsgi]
module = app:app # the entry point (module:callable)
for-readline = /home/centos/pythonservice/env.txt # environment variables used by the application
  env = %(_)
endfor =
master = true # run a master process
processes = 16 # number of worker processes to launch
plugins = python36, logfile # plugins uWSGI should load to run our application
uid = centos # user ID to run as
socket = /run/uwsgi/pythonservice.sock # socket file where Nginx will route requests
chown-socket = centos:nginx # we run as the centos user, so make it the owner of the socket file
chmod-socket = 666
vacuum = true # remove the socket file when the process stops
single-interpreter = true
reload-mercy = 30000 # how long uWSGI waits before killing workers when you restart the application
worker-reload-mercy = 30000
die-on-term = true
chdir = /home/centos/pythonservice/ # location of our codebase
logger = file:/var/log/pythonservice/app.log # application log path
req-logger = file:/var/log/pythonservice/apprequest.log # request log path

Sample env.txt file

APP_CONFIG=/home/centos/pythonservice/config/prod.properties
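The `for-readline` loop in the config feeds each line of env.txt to uWSGI as an environment variable, so the application reads it like any other. A small sketch of the application side (the fallback path here is a made-up example, not from the original service):

```python
import os

# APP_CONFIG is exported by uWSGI from env.txt before the app starts;
# fall back to a hypothetical dev config when running outside uWSGI.
config_path = os.environ.get("APP_CONFIG", "config/dev.properties")
```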

The beauty of using uWSGI is that you write your Python application with Flask without worrying about handling requests at scale; you can increase the number of processes running your application with a simple configuration change. uWSGI forwards each request to the Flask application, which dispatches it to the matching endpoint.
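For reference, `module = app:app` simply means uWSGI imports `app.py` and calls the object named `app` for every request. Flask's application object is exactly such a callable; the sketch below uses a plain WSGI function as a stand-in (this is an illustration, not the author's actual service):

```python
# app.py -- a minimal WSGI callable; uWSGI's "module = app:app" imports this
# module and invokes "app" for each request (a Flask app works the same way).
import json

def app(environ, start_response):
    # environ carries the request data (PATH_INFO, REQUEST_METHOD, headers, ...)
    body = json.dumps({"status": "ok", "path": environ.get("PATH_INFO", "/")}).encode()
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Swapping in a real Flask app changes nothing in the uWSGI config, since Flask exposes the same WSGI interface.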

The only thing left is configuring Nginx to pass requests to the uWSGI socket defined above. The bare minimum configuration would be

server {
    listen 80;
    server_name your_url;

    location /api/v1/health {
       include               uwsgi_params;
       uwsgi_pass            unix:/run/uwsgi/pythonservice.sock;
   }
}