
If the CGI program needs DB access, it has to open a new connection every time the process starts. Keeping the code in memory, for example with FastCGI, is not only about avoiding the startup penalty; it also lets you keep a DB connection pool, or at least a persistent DB connection per thread.
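
A minimal sketch of that pattern, assuming psycopg2 and a WSGI app served by a long-lived process (e.g. behind FastCGI); the DSN and pool sizes are placeholders:

  from psycopg2.pool import ThreadedConnectionPool

  # Created once at process start and reused for every request;
  # under plain CGI this would be rebuilt (and the connection
  # reopened) on every single hit.
  pool = ThreadedConnectionPool(
      minconn=1, maxconn=10,
      dsn="dbname=app user=app host=localhost",  # placeholder DSN
  )

  def application(environ, start_response):
      conn = pool.getconn()              # borrow a pooled connection
      try:
          with conn.cursor() as cur:
              cur.execute("SELECT 1")
              (value,) = cur.fetchone()
      finally:
          pool.putconn(conn)             # return it instead of closing
      start_response("200 OK", [("Content-Type", "text/plain")])
      return [str(value).encode()]

The point is that the pool outlives any one request, so requests borrow already-open connections instead of paying the connect cost each time.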


Do it at scale and your database will be sad about the number of connections.

At least, that was the case when I did the "Python is single-threaded, so let's run many of them" + "Python is slow, so let's run many of them" dance.

At scale you end up using shared connection pools outside of Python (like pgbouncer) plus a lot of tuning to make it serve the load without killing the database.
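
For anyone who hasn't run it: an illustrative pgbouncer.ini sketch, with the database name, paths, and pool sizes made up:

  [databases]
  ; placeholder alias -> the real Postgres server
  app = host=127.0.0.1 port=5432 dbname=app

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  ; transaction pooling: clients share server connections between
  ; transactions instead of each holding one open
  pool_mode = transaction
  max_client_conn = 2000
  default_pool_size = 20

With transaction pooling, a couple thousand short-lived app connections can share a pool of ~20 real Postgres connections, and those pool_size/max_client_conn knobs are exactly the tuning being described.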

Of course, then we reimplemented it in a multithreaded, somewhat performant language and it became dead simple again.


That is why CGI eventually evolved into models that keep some state (the process, its connections) alive between requests, FastCGI being the obvious example.


There were standard ways to handle that, such as hosting a separate daemon that effectively acts as your proxy. Using Unix sockets instead of TCP/IP makes connecting to it relatively cheap.
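
Roughly like this, sketched in Python; the socket path and wire format are invented for illustration, and the daemon on the other end would be the thing holding the persistent DB connections:

  import socket

  SOCK_PATH = "/var/run/dbproxy.sock"  # hypothetical daemon socket

  def query_via_daemon(payload: bytes) -> bytes:
      # Connecting over AF_UNIX skips the TCP handshake entirely,
      # which is what makes a per-request connect cheap here.
      with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
          s.connect(SOCK_PATH)
          s.sendall(payload + b"\n")
          return s.recv(65536)

  if __name__ == "__main__":
      print(query_via_daemon(b"SELECT 1"))

Each short-lived CGI process pays only the Unix-socket connect, while the daemon amortizes the expensive DB setup across all of them.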


Use UDP



