{"id":1892,"date":"2024-07-02T23:43:36","date_gmt":"2024-07-03T03:43:36","guid":{"rendered":"https:\/\/www.craigperler.com\/blog\/?p=1892"},"modified":"2024-07-03T00:01:21","modified_gmt":"2024-07-03T04:01:21","slug":"deploying-a-dockerized-django-application-to-production","status":"publish","type":"post","link":"https:\/\/www.craigperler.com\/blog\/2024\/07\/02\/deploying-a-dockerized-django-application-to-production\/","title":{"rendered":"Deploying a Dockerized Django Application to Production"},"content":{"rendered":"\n<p>As frustrating as it might be for a Python developer to figure out how to get Docker running locally, it&#8217;s even more so figuring out how to get it working remotely, in production, and available online. Often running in production requires an application server which is what runs the Python application and handles web requests, and a web server and reverse proxy which serves static files, handles SSL, and forwards requests from users to the application server. In this example, we&#8217;ll use Gunicorn  as the app server, and nginx as the web server\/reverse proxy.<\/p>\n\n\n\n<p>In addition to these new components that get usually get configured in code, we&#8217;ll also need to use a service that can make your code available online &#8211; something that provide resources that runs your code, and that allocates an IP address to your web server so it can be accessed online. For this example, we&#8217;re going to use DigitalOcean.<\/p>\n\n\n\n<h2 id=\"digitalocean\" class=\"wp-block-heading\"><a href=\"https:\/\/m.do.co\/c\/ddd8a35e2147\">DigitalOcean<\/a><\/h2>\n\n\n\n<p>The first thing we want to do is create a <em>droplet<\/em> on <a href=\"https:\/\/m.do.co\/c\/ddd8a35e2147\">Digital Ocean<\/a>. <\/p>\n\n\n\n<p>As a preface, in <a href=\"https:\/\/www.craigperler.com\/blog\/2016\/10\/21\/projectsherpa-a-startup-retrospective\/\" data-type=\"post\" data-id=\"1107\">my (distant) past<\/a>, I used to do this sort of stuff on AWS. 
At the time, AWS was still sorta new, and so there were limited components to play with, and thus limited-er ways to screw things up. Since then, the ecosystem has exploded, so I found it much easier to take a more user-friendly approach and opted for <a href=\"https:\/\/m.do.co\/c\/ddd8a35e2147\">DigitalOcean<\/a>.<\/p>\n\n\n\n<p>A <a href=\"https:\/\/www.digitalocean.com\/products\/droplets\">droplet<\/a> is a scalable virtual machine that runs on DigitalOcean\u2019s infrastructure. It provides the computational resources needed to run your application, including CPU, memory, and storage. When you create a droplet, you choose the operating system (e.g., Ubuntu), the size (amount of CPU and memory), and additional features like SSH keys for secure access.<\/p>\n\n\n\n<p>To create a droplet:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Log in to your DigitalOcean account (obviously <a href=\"https:\/\/m.do.co\/c\/ddd8a35e2147\">create<\/a> one first if you need to).<\/li>\n\n\n\n<li>Create a new project if you haven&#8217;t already.<\/li>\n\n\n\n<li>Click on &#8216;Create&#8217; and select &#8216;Droplets&#8217;.<\/li>\n\n\n\n<li>Choose the latest version of Ubuntu as your operating system.<\/li>\n\n\n\n<li>Select your desired plan, whatever is cheapest to start &#8211; a small web app doesn&#8217;t need much. And yes, this isn&#8217;t free; publishing things online will cost you money, albeit only pennies for small projects over a short period.<\/li>\n\n\n\n<li>Add your SSH key to the droplet. 
You can follow the instructions <a href=\"https:\/\/docs.digitalocean.com\/products\/droplets\/how-to\/add-ssh-keys\/\">here<\/a> for this step.<\/li>\n\n\n\n<li>Click &#8220;Create Droplet&#8221; and wait for the magic to happen (should just take a few seconds for the new VPS to start up).<\/li>\n<\/ol>\n\n\n\n<h2 id=\"connect-to-the-droplet\" class=\"wp-block-heading\">Connect to the Droplet<\/h2>\n\n\n\n<p>The next step is to&nbsp;SSH into your droplet, and start installing and deploying stuff. Connecting is a critical step, so it gets its own heading here. If you F&#8217;d up your SSH key, this won&#8217;t work. Grab the IP address from your new droplet, and let&#8217;s say for example it&#8217;s <em>aa.bb.cc.dd<\/em>, then run the following from a terminal (or command prompt) window:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nssh root@aa.bb.cc.dd\n<\/pre><\/div>\n\n\n<h2 id=\"install-docker-and-docker-compose\" class=\"wp-block-heading\">Install Docker and Docker Compose<\/h2>\n\n\n\n<p>Assuming you got onto the machine, you can now start installing all the pre-reqs.<\/p>\n\n\n\n<p>To set up <a href=\"https:\/\/docs.docker.com\/get-docker\/\">Docker<\/a> on an Ubuntu system, you need to run several commands. Here\u2019s a detailed explanation of each command:<\/p>\n\n\n\n<h3 id=\"update-the-package-list\" class=\"wp-block-heading\">Update the Package List<\/h3>\n\n\n\n<p>This command updates the list of available packages and their versions. It does not install or upgrade any packages but fetches the most recent information about the available packages from the repositories specified in <code>\/etc\/apt\/sources.list<\/code>. 
This ensures that you can install the latest versions of the packages.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo apt-get update\n<\/pre><\/div>\n\n\n<h3 id=\"install-required-packages\" class=\"wp-block-heading\">Install Required Packages<\/h3>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo apt-get install ca-certificates curl gnupg\n<\/pre><\/div>\n\n\n<h3 id=\"create-the-directory-for-the-docker-keyring\" class=\"wp-block-heading\">Create the Directory for the Docker Keyring<\/h3>\n\n\n\n<p>This command creates a directory (<code>\/etc\/apt\/keyrings<\/code>) where the Docker GPG key will be stored. The <code>install<\/code> command is used with the <code>-m 0755<\/code> option, which sets the permissions of the directory to <code>0755<\/code> (read and execute permissions for everyone, and write permission for the owner). The <code>-d<\/code> option indicates that a directory is being created.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo install -m 0755 -d \/etc\/apt\/keyrings\n<\/pre><\/div>\n\n\n<h3 id=\"download-and-add-the-docker-gpg-key\" class=\"wp-block-heading\">Download and Add the Docker GPG Key<\/h3>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncurl -fsSL https:\/\/download.docker.com\/linux\/ubuntu\/gpg | sudo gpg --dearmor -o \/etc\/apt\/keyrings\/docker.gpg\n<\/pre><\/div>\n\n\n<h3 id=\"set-the-correct-permissions-on-the-docker-gpg-key\" class=\"wp-block-heading\">Set the Correct Permissions on the Docker GPG Key<\/h3>\n\n\n\n<p>This command changes the permissions of the Docker GPG key file to make it readable by all users (<code>a+r<\/code>). 
This is necessary because the <code>apt<\/code> command needs to be able to read this key to verify the authenticity of the Docker packages during installation.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo chmod a+r \/etc\/apt\/keyrings\/docker.gpg\n<\/pre><\/div>\n\n\n<h3 id=\"add-the-docker-repository\" class=\"wp-block-heading\"><strong>Add the Docker Repository<\/strong><\/h3>\n\n\n\n<p>The following command adds the Docker repository to your Ubuntu system. This command combines several shell utilities to achieve this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\necho &quot;deb &#x5B;arch=$(dpkg --print-architecture) signed-by=\/etc\/apt\/keyrings\/docker.gpg] https:\/\/download.docker.com\/linux\/ubuntu $(lsb_release -cs) stable&quot; | sudo tee \/etc\/apt\/sources.list.d\/docker.list &gt; \/dev\/null\n<\/pre><\/div>\n\n\n<p>Let\u2019s break down this command:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>echo<\/code>: This command outputs the string to the terminal.<\/li>\n\n\n\n<li><code>deb<\/code>: This keyword indicates that the repository is a Debian archive.<\/li>\n\n\n\n<li><code>[arch=$(dpkg --print-architecture)]<\/code>: This specifies the architecture of your system (e.g., amd64). The <code>$(dpkg --print-architecture)<\/code> command dynamically inserts your system\u2019s architecture.<\/li>\n\n\n\n<li><code>signed-by=\/etc\/apt\/keyrings\/docker.gpg<\/code>: This option specifies the location of the GPG key used to verify the packages from this repository.<\/li>\n\n\n\n<li><code>https:\/\/download.docker.com\/linux\/ubuntu<\/code>: This is the URL of the Docker repository.<\/li>\n\n\n\n<li><code>$(lsb_release -cs)<\/code>: This command inserts the codename of your Ubuntu release (e.g., focal for Ubuntu 20.04). 
This ensures you get the appropriate packages for your version of Ubuntu.<\/li>\n\n\n\n<li><code>stable<\/code>: This indicates that you want to use the stable version of Docker packages.<\/li>\n\n\n\n<li><code>|<\/code>: This is a pipe, which passes the output of one command as input to another.<\/li>\n\n\n\n<li><code>sudo<\/code>: This runs the command with superuser privileges, which is necessary to write to system directories.<\/li>\n\n\n\n<li><code>tee \/etc\/apt\/sources.list.d\/docker.list<\/code>: This writes the output to a file named docker.list in the \/etc\/apt\/sources.list.d directory.<\/li>\n\n\n\n<li><code>&gt; \/dev\/null<\/code>: This discards the standard output, effectively silencing the tee command.<\/li>\n<\/ul>\n\n\n\n<p>After that, update the package list once more via <code>sudo apt-get update<\/code> to refresh the list of available packages and their versions from all configured repositories, including the newly added Docker repository. This ensures that you can install the latest Docker packages from the Docker repository.<\/p>\n\n\n\n<p>Lastly, you can then <a href=\"https:\/\/docs.docker.com\/compose\/install\/linux\/\">install<\/a>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Docker Community Edition (CE), which is the core Docker software that includes the Docker Engine and CLI. It allows you to run containerized applications.<\/li>\n\n\n\n<li>The Docker CLI (Command-Line Interface) client, which is used to interact with the Docker daemon (the background service that manages Docker containers). This package allows you to run Docker commands from the terminal.<\/li>\n\n\n\n<li>Containerd, which is an industry-standard container runtime that manages the complete container lifecycle of its host system. 
It is a core component of Docker, responsible for managing containers\u2019 execution and state.<\/li>\n\n\n\n<li>Docker Buildx which is a CLI plugin that extends the Docker command with advanced features for building Docker images, such as multi-architecture builds, cache import\/export, and more.<\/li>\n\n\n\n<li>Docker Compose, a tool for defining and running multi-container Docker applications. The docker-compose-plugin integrates Docker Compose functionality into the Docker CLI, allowing you to use docker compose commands to manage your multi-container applications.<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin\n<\/pre><\/div>\n\n\n<h2 id=\"verify-docker\" class=\"wp-block-heading\">Verify Docker<\/h2>\n\n\n\n<p>If Docker and Docker Compose are installed properly, you should be able to verify by running this command:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo docker run hello-world\n<\/pre><\/div>\n\n\n<figure class=\"wp-block-image size-full\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" width=\"1118\" height=\"374\" src=\"https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-02-at-10.41.43%E2%80%AFPM.png?resize=1118%2C374&#038;ssl=1\" alt=\"\" class=\"wp-image-1901\" srcset=\"https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-02-at-10.41.43%E2%80%AFPM.png?w=1118&amp;ssl=1 1118w, https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-02-at-10.41.43%E2%80%AFPM.png?resize=800%2C268&amp;ssl=1 800w, https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-02-at-10.41.43%E2%80%AFPM.png?resize=120%2C40&amp;ssl=1 
120w, https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-02-at-10.41.43%E2%80%AFPM.png?resize=90%2C30&amp;ssl=1 90w, https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-02-at-10.41.43%E2%80%AFPM.png?resize=320%2C107&amp;ssl=1 320w, https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-02-at-10.41.43%E2%80%AFPM.png?resize=560%2C187&amp;ssl=1 560w\" sizes=\"auto, (max-width: 1118px) 100vw, 1118px\" \/><\/figure>\n\n\n\n<h2 id=\"deploying-your-application\" class=\"wp-block-heading\">Deploying your Application<\/h2>\n\n\n\n<p>As a next step, you need to generate an SSH key and add it to your GitHub account, which tells GitHub it&#8217;s OK for your VPS to pull down code.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nssh-keygen -t ed25519 -C &quot;your.email@gmail.com&quot;\ncat \/root\/.ssh\/id_ed25519.pub\n<\/pre><\/div>\n\n\n<p>That generates your key, but then you need to add it to GitHub. Head <a href=\"https:\/\/github.com\/settings\/keys\">here<\/a>, then click New SSH Key and paste in the public key you just printed.<\/p>\n\n\n\n<p>And finally (sorta), pull down your code to the droplet from GitHub:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncd \/var\nmkdir www\ncd www\ngit clone git@github.com:yourusername\/yourrepository.git\ncd yourrepository\n<\/pre><\/div>\n\n\n<h2 id=\"a-note-on-configuration-files\" class=\"wp-block-heading\">A Note on Configuration Files<\/h2>\n\n\n\n<p>When deploying a Django web application using Docker, several key files are essential for setting up the environment. These include the Dockerfile, Docker Compose file, Django settings, and Nginx configuration. 
You can check this <a href=\"https:\/\/www.craigperler.com\/blog\/2024\/06\/05\/setup-django-docker-postgresql-react\/\" data-type=\"post\" data-id=\"1807\">blog post<\/a> for a bit of intro detail on the Docker stuff, which gets expanded here for the production components. Here&#8217;s a brief overview of each.<\/p>\n\n\n\n<h3 id=\"dockerfile\" class=\"wp-block-heading\">Dockerfile<\/h3>\n\n\n\n<p>The Dockerfile is a script that contains a series of instructions on how to build a Docker image for your Django application. Here is a sample that works for me:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Specifies the base image. (A slim variant like python:3.11-slim would reduce the image size.)\nFROM python:3.11\n\n# Environment variables: PYTHONDONTWRITEBYTECODE=1 ensures Python doesn\u2019t write .pyc files, and PYTHONUNBUFFERED=1 keeps the output unbuffered.\nENV PYTHONDONTWRITEBYTECODE=1\nENV PYTHONUNBUFFERED=1\n\n# Sets the working directory inside the container.\nWORKDIR \/app\n\n# Install netcat-openbsd, a networking utility for reading from and writing to network connections using the TCP or UDP protocols. It\u2019s often used for debugging and network diagnostics.\nRUN apt-get update &amp;&amp; apt-get install -y netcat-openbsd &amp;&amp; rm -rf \/var\/lib\/apt\/lists\/*\n\n# Copy the rest of your Django application\nCOPY . 
\/app\n\n# Install Python dependencies\nRUN pip install pipenv gunicorn\nRUN pipenv --python \/usr\/local\/bin\/python3.11\nRUN pipenv install --system --deploy\n\n# Run collectstatic command to collect static files, including React build artifacts\n# Note: Django settings should be configured to include \/app\/static in STATICFILES_DIRS or directly as STATIC_ROOT\nRUN python manage.py collectstatic --noinput\n\n# Document that the app listens on port 8001 (you can use whatever port you want)\nEXPOSE 8001\n\n# Copy the entrypoint script, which hands off to the Django server command\nCOPY entrypoint.sh \/entrypoint.sh\n\n# Make the entrypoint script executable\nRUN chmod +x \/entrypoint.sh\n\n# Set the entrypoint script to run when the container starts\nENTRYPOINT &#x5B;&quot;\/entrypoint.sh&quot;]\n<\/pre><\/div>\n\n\n<h3 id=\"docker-compose-file\" class=\"wp-block-heading\"><strong>Docker Compose File<\/strong><\/h3>\n\n\n\n<p>You can create a separate docker-compose file for each environment, and then &#8216;inherit&#8217; the configuration when running the actual commands. 
For the sake of simplicity, the following is a complete (no inheritance) example of a production file:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: xml; title: ; notranslate\" title=\"\">\nservices:\n  db:\n    image: postgres\n    environment:\n      POSTGRES_DB: ${POSTGRES_DB}\n      POSTGRES_USER: ${POSTGRES_USER}\n      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}\n    ports:\n      - &quot;5434:5432&quot;\n    volumes:\n      - postgres_data:\/var\/lib\/postgresql\/data\n\n  web:\n    build: .\n    command: gunicorn --bind 0.0.0.0:8001 config.wsgi:application\n    environment:\n      - DJANGO_SETTINGS_MODULE=config.settings.prod\n    expose:\n      - &quot;8001&quot;\n    volumes:\n      - .:\/app\n      - static_volume:\/app\/staticfiles\n    depends_on:\n      - db\n    env_file:\n      - .env\n\n  nginx:\n    build: .\/nginx\n    ports:\n      - &quot;80:80&quot;\n    depends_on:\n      - web\n    volumes:\n      - static_volume:\/static\n\nvolumes:\n  postgres_data:\n  static_volume:\n<\/pre><\/div>\n\n\n<p>I don&#8217;t want to tell you how long it took me to get this right, but suffice it to say, figuring out the static file part was not easy &#8211; lots of trial and error and googling and ChatGPT to figure out what worked. Let me break this thing down.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Services<\/strong>: Defines three services: db (PostgreSQL), web (Django application), and nginx (web server).<\/li>\n\n\n\n<li><strong>Environment Variables<\/strong>: Used to configure the database service. You&#8217;ll need a <code>.env<\/code> file in the same directory that specifies these variables, which get templated in here.<\/li>\n\n\n\n<li><strong>Build and Run Commands<\/strong>: The web service builds from the current directory and uses Gunicorn to serve the Django application. 
The nginx service builds from its own Dockerfile in the <code>.\/nginx<\/code> folder (more below).<\/li>\n\n\n\n<li><strong>Dependencies<\/strong>: The web service depends on the db service, and the nginx service depends on web.<\/li>\n\n\n\n<li><strong>Volumes<\/strong>: Persistent storage for PostgreSQL data and static files. This is what got me. The web container maps <code>\/app\/staticfiles<\/code> to a persistent volume. When you collect static files via Django, they should get dropped into this location. That same volume then gets mapped to the <code>\/static<\/code> folder in the nginx container. When you access static data from the web, you need to have files in this <code>\/static<\/code> folder to serve back, and the volume ensures consistency from the collected files in the web container to nginx. I know that&#8217;s a mouthful.<br><br>While theoretically mapping <code>.<\/code> to <code>\/app<\/code> (in the web container) should include all subdirectories including static data, the practical aspects of Docker\u2019s volume management and container lifecycle necessitate using a named volume for reliable and consistent file sharing between services. This approach ensures that static files collected by the web service are always available to the nginx service, preventing 404 errors on static assets.<\/li>\n<\/ul>\n\n\n\n<h3 id=\"django-settings\" class=\"wp-block-heading\"><strong>Django Settings<\/strong><\/h3>\n\n\n\n<p>The Django settings file (<code>base.py<\/code>) needs specific configurations for a production environment.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nimport os\n\nDEBUG = False\nALLOWED_HOSTS = &#x5B;'your_domain.com', 'your_server_ip']\n\n# BASE_DIR is defined near the top of the standard Django settings scaffold\nSTATIC_URL = '\/static\/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')\n<\/pre><\/div>\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Debug Mode<\/strong>: DEBUG = False disables debug mode for production. 
You can of course leave it as <code>True<\/code> for extra help while troubleshooting, but don&#8217;t ship production with debug enabled.<\/li>\n\n\n\n<li><strong>Allowed Hosts<\/strong>: ALLOWED_HOSTS specifies the domains\/IPs that can serve the application. You need to add your new DigitalOcean IP address here. Or, you can be slick by using this: <code>ALLOWED_HOSTS = os.getenv('ALLOWED_HOSTS').split(',')<\/code><\/li>\n\n\n\n<li><strong>Static Files<\/strong>: STATIC_URL and STATIC_ROOT configure the static files settings for serving via Nginx. Again, the static files must get dropped into the staticfiles folder, and URL requests for those files need to use the \/static\/ URL prefix.<\/li>\n<\/ul>\n\n\n\n<p>With Django settings, you can create a base settings file, and then extend that by environment, such as production vs. development. Simply import the base settings at the top of each environment-specific file.<\/p>\n\n\n\n<h3 id=\"nginx-dockerfile\" class=\"wp-block-heading\">Nginx Dockerfile<\/h3>\n\n\n\n<p>The Nginx Dockerfile sets up the Nginx web server with your custom configuration.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nFROM nginx:1.25\n\n# Remove default configuration\nRUN rm \/etc\/nginx\/conf.d\/default.conf\n\n# Copy custom configuration\nCOPY default.conf \/etc\/nginx\/conf.d\n<\/pre><\/div>\n\n\n<h3 id=\"nginx-configuration\" class=\"wp-block-heading\"><strong>Nginx Configuration<\/strong><\/h3>\n\n\n\n<p>The Nginx configuration file (<span class=\"s1\"><code>default.conf<\/code><\/span>) is used to reverse proxy requests to the Gunicorn server and serve static files.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nserver {\n    listen 80;\n\n    location \/ {\n        proxy_pass http:\/\/web:8001;\n        proxy_set_header Host $host;\n        proxy_set_header X-Real-IP $remote_addr;\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n        proxy_set_header X-Forwarded-Proto $scheme;\n    }\n\n  
  location \/static\/ {\n        alias \/static\/;\n    }\n}\n<\/pre><\/div>\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Listening Port<\/strong>: <code>listen 80;<\/code> specifies that Nginx listens on port 80.<\/li>\n\n\n\n<li><strong>Reverse Proxy<\/strong>: The <code>location \/<\/code> block proxies requests to the Gunicorn server running on <code>http:\/\/web:8001<\/code>. (Again, you can specify whatever port you want.)<\/li>\n\n\n\n<li><strong>Static Files<\/strong>: The <code>location \/static\/<\/code> block serves static files directly from the <code>\/static\/<\/code> directory.<\/li>\n\n\n\n<li>For those proxy&#8230; lines:\n<ul class=\"wp-block-list\">\n<li><strong>proxy_pass http:\/\/web:8001;<\/strong>: Forwards the request to the Gunicorn server.<\/li>\n\n\n\n<li><strong>proxy_set_header Host $host;<\/strong>: Ensures the Host header is preserved, which is crucial for virtual hosting.<\/li>\n\n\n\n<li><strong>proxy_set_header X-Real-IP $remote_addr;<\/strong>: Passes the client\u2019s IP address to the backend server.<\/li>\n\n\n\n<li><strong>proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;<\/strong>: Maintains a list of proxies through which the request has passed.<\/li>\n\n\n\n<li><strong>proxy_set_header X-Forwarded-Proto $scheme;<\/strong>: Tells the backend server whether the original request was HTTP or HTTPS.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 id=\"entrypoint\" class=\"wp-block-heading\">Entrypoint<\/h3>\n\n\n\n<p>The entrypoint script ensures that the database is ready before running Django migrations and starting the application. (Make sure to give execute permission to entrypoint.sh.)<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n#!\/bin\/sh\n\n# Wait for the database to be ready\necho &quot;Waiting for PostgreSQL to start...&quot;\nwhile ! 
nc -z db 5432; do\n  sleep 0.1\ndone\necho &quot;PostgreSQL started&quot;\n\n# Run Django migrations\necho &quot;Running migrations&quot;\npython manage.py migrate --noinput\n\n# Start the Django app (specified in the Dockerfile)\nexec &quot;$@&quot;\n<\/pre><\/div>\n\n\n<h3 id=\"config-conclusion\" class=\"wp-block-heading\">Config Conclusion<\/h3>\n\n\n\n<p>These configurations work together to deploy your Django application using Docker, Gunicorn, and Nginx. The Dockerfile sets up the application environment, the Docker Compose file defines the services, the Django settings configure the application for production, the Nginx configuration handles web traffic and static file serving, and the entrypoint script ensures the database is ready before starting the application.<\/p>\n\n\n\n<p>The folder and file structure for this should (could) look like the following (at least it works for me this way):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nyour_project\/\n\u251c\u2500\u2500 app\/\n\u2502   \u251c\u2500\u2500 Dockerfile\n\u2502   \u251c\u2500\u2500 entrypoint.sh\n\u2502   \u251c\u2500\u2500 Pipfile\n\u2502   \u251c\u2500\u2500 Pipfile.lock\n\u2502   \u251c\u2500\u2500 manage.py\n\u2502   \u251c\u2500\u2500 config\/\n\u2502   \u2502   \u251c\u2500\u2500 __init__.py\n\u2502   \u2502   \u251c\u2500\u2500 settings\/\n\u2502   \u2502   \u2502   \u251c\u2500\u2500 __init__.py\n\u2502   \u2502   \u2502   \u251c\u2500\u2500 base.py\n\u2502   \u2502   \u2502   \u2514\u2500\u2500 prod.py\n\u2502   \u2502   \u251c\u2500\u2500 urls.py\n\u2502   \u2502   \u2514\u2500\u2500 wsgi.py\n\u2502   \u251c\u2500\u2500 myapp\/\n\u2502   \u2502   \u251c\u2500\u2500 __init__.py\n\u2502   \u2502   \u251c\u2500\u2500 admin.py\n\u2502   \u2502   \u251c\u2500\u2500 apps.py\n\u2502   \u2502   \u251c\u2500\u2500 models.py\n\u2502   \u2502   \u2514\u2500\u2500 views.py\n\u2502   \u2514\u2500\u2500 
...\n\u251c\u2500\u2500 nginx\/\n\u2502   \u251c\u2500\u2500 Dockerfile\n\u2502   \u2514\u2500\u2500 default.conf\n\u251c\u2500\u2500 docker-compose.prod.yml\n\u2514\u2500\u2500 .env\n<\/pre><\/div>\n\n\n<h2 id=\"deploy-to-digital-ocean\" class=\"wp-block-heading\"><strong>Deploy to Digital Ocean<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Set Up Environment Variables<\/strong>: Create a .env file in your project root with your environment variables.<\/li>\n\n\n\n<li><strong>Build and Start Docker Containers<\/strong>: This command runs <code>docker compose<\/code>, referencing your production compose file (if you&#8217;re using inheritance\/override with docker, you can stack these -f files), and then builds and spins up the containers in detached mode (in the background).<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo docker compose -f docker-compose.prod.yml up --build -d\n<\/pre><\/div>\n\n\n<h2 id=\"create-a-superuser\" class=\"wp-block-heading\">Create a Superuser<\/h2>\n\n\n\n<p>The entrypoint script here runs Django migrations, but if you&#8217;ve gone rogue and are using your own, you&#8217;ll need to run migrations yourself. Otherwise, the last step (done once) will be to create a superuser for your new database:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo docker compose -f docker-compose.prod.yml exec web python manage.py createsuperuser\n<\/pre><\/div>\n\n\n<p>This command runs <code>python manage.py createsuperuser<\/code> within your Docker web container, and will then prompt you for the usual Django stuff to create an admin in your app.<\/p>\n\n\n\n<h2 id=\"conclusion\" class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>With these steps, you should have a fully functional Django web application running in production on Digital Ocean using Docker, Gunicorn, and Nginx. 
This setup ensures your application is scalable and easy to manage. <\/p>\n\n\n\n<p>Personally, getting this right offered an unexpected level of challenge and my repeated asks of ChatGPT left me even more frustrated as I tweaked settings and worked through things. I found <a href=\"https:\/\/testdriven.io\/blog\/dockerizing-django-with-postgres-gunicorn-and-nginx\/\">this article<\/a> by Michael Herman on <a href=\"https:\/\/testdriven.io\/\">testdriven.io<\/a> to be incredibly helpful as a reference. Hopefully someone (or perhaps just my future self) finds this all useful!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Discover how to efficiently deploy a Django web application using Docker, Gunicorn, and Nginx on DigitalOcean. Follow this comprehensive guide to set up your environment, configure your application, and ensure your web app is scalable and production-ready.<\/p>\n","protected":false},"author":1,"featured_media":1914,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[64,46],"tags":[],"powerkit_post_featured":[],"class_list":{"0":"post-1892","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-application-development","8":"category-projects"},"jetpack_featured_media_url":"https:\/\/i0.wp.com\/www.craigperler.com\/blog\/wp-content\/uploads\/2024\/07\/Django_Deployment_Final_800x457.png?fit=800%2C457&ssl=1","jetpack_shortlink":"https:\/\/wp.me\/p1SwZ6-uw","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/posts\/1892","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\
/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/comments?post=1892"}],"version-history":[{"count":5,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/posts\/1892\/revisions"}],"predecessor-version":[{"id":1918,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/posts\/1892\/revisions\/1918"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/media\/1914"}],"wp:attachment":[{"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/media?parent=1892"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/categories?post=1892"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/tags?post=1892"},{"taxonomy":"powerkit_post_featured","embeddable":true,"href":"https:\/\/www.craigperler.com\/blog\/wp-json\/wp\/v2\/powerkit_post_featured?post=1892"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}