This post is part of the Page Requests to the Metal series, where we cover all the steps from a web request down to the electrons that move in the CPU. See the intro/overview page to see where this fits in.
The request has left the client's browser, traversed the network, and arrived at the web server. The portion of the system that deals with these requests is often categorized as the "back end", and we will run with that terminology to group the next few steps.
Handling the web request
Once the server has received the packets, they are passed up the networking stack, and packets addressed to the correct TCP port are routed to the web server process. At a very basic level we can write code to accept an incoming connection like this:
```python
import socket

serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind((socket.gethostname(), 80))
serversocket.listen(5)
```
Then we can create a main loop to handle accepting incoming connections:
```python
while True:
    (clientsocket, address) = serversocket.accept()
    # now do something with the clientsocket
```
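To make the "do something" concrete, here is a minimal sketch of the naive approach: hand each accepted connection to its own thread. The `handle` function and the canned response are illustrative assumptions, not part of any real server.

```python
import socket
import threading

def handle(clientsocket):
    # Naive per-connection handling: read some request bytes,
    # send back a canned HTTP response, then close the connection.
    clientsocket.recv(1024)
    clientsocket.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    clientsocket.close()

# With the serversocket set up as above, the accept loop would spawn
# one thread per connection -- exactly the pattern that scales poorly
# when thread creation is expensive:
#
# while True:
#     clientsocket, address = serversocket.accept()
#     threading.Thread(target=handle, args=(clientsocket,)).start()
```

Each connection costs a thread here, which is precisely the problem discussed next.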
However, doing this with CPython would not have scaled all that well in versions of Python that had heavy thread creation costs. (We once had to re-engineer a system that connected to IoT devices via TCP/IP because it handled incoming connections in a naive way and ran into scaling problems as it got more customers.) For hosting websites we would do well to use specialized software that handles HTTP requests and handles them well; implementing your own code to efficiently handle many connections concurrently is a difficult task. Python as a language experienced huge growth in adoption for implementing APIs, yet it had a model of execution that was problematic for these sorts of workloads due to threading issues such as the GIL and heavy thread creation costs. As a result, Python ended up introducing asyncio into the core language to let people use an event loop for this sort of network programming in a way that is much lighter on resource demands. But before Python can do anything with an incoming request, the request has to make it into the Python interpreter/runtime.
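For contrast with the thread-per-connection model, here is a minimal sketch of the asyncio approach: a single event loop multiplexes many connections without spawning a thread per client. The echo handler and the port number are placeholder assumptions for illustration.

```python
import asyncio

async def handle_client(reader, writer):
    # Echo back whatever the client sends; a stand-in for
    # real request handling. No thread is created per connection.
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # One event loop services every connection cooperatively.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # uncomment to actually run the server
```

Each `await` is a point where the event loop can switch to servicing another connection, which is what keeps the resource cost per connection so low.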
In practice, production HTTP servers are often running something like Apache or NGINX. In the case of NGINX, have a look at this article that outlines how its internal architecture works: https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/
A quick overview of NGINX is:
- NGINX creates worker processes
- These worker processes listen for incoming connections
- When a connection is accepted the connection is assigned to a state machine that handles the flow of data for that connection
- When the state machine requires content to be generated it will forward to our Python code via uWSGI
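The last step of that flow can be sketched as an NGINX configuration fragment. The socket path is a hypothetical example; the `uwsgi_pass` and `include uwsgi_params` directives are the standard NGINX mechanism for forwarding requests to a uWSGI backend.

```nginx
server {
    listen 80;

    location / {
        # Forward requests that need generated content to the
        # Python application server over the uwsgi protocol.
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/app.sock;  # hypothetical socket path
    }
}
```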
Getting the web request into Python - WSGI
Given that in production we will be using a dedicated HTTP server that's not implemented in Python, we need to get the request over to Python at some point.
The WSGI PEP (PEP 3333) explains the interface for connecting the web server to the Python world; Python backend software often sits behind a WSGI implementation such as uWSGI. WSGI is the bridge between the server software and the Python code that processes the request, which then forwards it on to the Django URL router to get dealt with.
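The interface itself is small: a WSGI application is just a callable that takes the request environment and a `start_response` callback, and returns an iterable of bytes. A minimal sketch (the response body is a placeholder; a framework like Django supplies a far more capable application object):

```python
def application(environ, start_response):
    """A minimal WSGI application as specified by PEP 3333.

    `environ` is a dict describing the request; `start_response`
    is called with the status line and response headers before
    the body is returned as an iterable of bytes.
    """
    body = b"Hello from the Python side!"
    status = "200 OK"
    headers = [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ]
    start_response(status, headers)
    return [body]
```

During development this callable can be served with the standard library's `wsgiref.simple_server`; in production, uWSGI calls it in exactly the same way.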
By Janis Lesinskis
This post is part 4 of the "PageRequestsToTheMetal" series:
- Page requests to the metal - introduction
- Page requests to the metal - frontend
- Page requests to the metal - Network stack - From frontend to backend
- Page requests to the metal - Backend - What happens on the server *
- Page requests to the metal - Backend web framework
- Page requests to the metal - Backend implementation
- Page requests to the metal - hardware level