<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Asif Chowdhury — DevOps &amp; Full-Stack Blog</title>
    <link>https://asifthewebguy.me/blog.html</link>
    <atom:link href="https://asifthewebguy.me/feed.xml" rel="self" type="application/rss+xml" />
    <description>Practical writing on Node.js, Docker, PostgreSQL, SaaS architecture, and full-stack development by Asif Chowdhury.</description>
    <language>en</language>
    <lastBuildDate>Thu, 30 Apr 2026 11:30:31 +0000</lastBuildDate>
    <generator>Asif's static CMS</generator>
    <item>
      <title>The Guard: Hardening Your Containers for Production</title>
      <link>https://asifthewebguy.me/posts/the-guard-hardening-your-containers-for-production.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/the-guard-hardening-your-containers-for-production.html</guid>
      <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Moving from development to production requires a shift in mindset. This final guide provides a security and reliability checklist to ensure your Docker containers are battle-ready.]]></description>
      <content:encoded><![CDATA[<p>We have spent this series building, shrinking, and orchestrating our application stacks. But before you open your home lab or professional project to the world, you need to put on your armor. Moving a container into a production environment is about more than just making it work—it is about making it secure, stable, and efficient.</p>
<p>Today, we wrap up our series with <strong>The Guard</strong>, a final checklist of best practices to harden your Docker environment.</p>
<h2>1. Security First: Trust No One</h2>
<p>Security in Docker starts at the image level. If your container is compromised, you want to ensure the damage is contained.</p>
<ul>
<li><strong>Run as Non-Root:</strong> By default, containers run as root. You should always configure your Dockerfile to use a non-privileged user to limit what an attacker can do if they gain access (see the sketch after this list).</li>
<li><strong>Use Official Images:</strong> Whenever possible, start your Dockerfile with an official, verified image from Docker Hub.</li>
<li><strong>Scan for Vulnerabilities:</strong> Use tools to scan your images for known security holes before you deploy them.</li>
<li><strong>Keep Images Updated:</strong> Security patches are released constantly; regularly rebuilding your images ensures you have the latest fixes.</li>
</ul>
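<p>As a minimal sketch of the first point, here is how a Node.js Dockerfile might drop root privileges. The <code>appuser</code> name is an arbitrary example; the official <code>node</code> images also ship a built-in <code>node</code> user you can switch to directly.</p>
<pre><code class="language-dockerfile">FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Create an unprivileged user and group, then switch to it.
# Everything after this line runs without root.
RUN addgroup -S appuser &amp;&amp; adduser -S appuser -G appuser
USER appuser
CMD ["node", "server.js"]
</code></pre>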
<h2>2. Resource Management: Don’t Let One Container Crash the Server</h2>
<p>In a production environment, you cannot allow a single container to go rogue and eat up all your host's memory or CPU.</p>
<ul>
<li><strong>Set Resource Limits:</strong> Always define maximum memory and CPU limits for your containers. This ensures that even if a service has a memory leak, it won't crash your entire Proxmox node or production server (an example follows this list).</li>
<li><strong>Avoid the :latest Tag:</strong> Never use the <code>:latest</code> tag in production. Use specific version tags (like <code>node:18.1.0</code>) so you know exactly what code is running and can roll back easily if something breaks.</li>
</ul>
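<p>For example, the <code>docker run</code> flags below cap a container at 512 MB of RAM and one CPU core. The numbers are placeholders, not recommendations, and <code>my-app</code> is a hypothetical image name.</p>
<pre><code class="language-bash"># Hard limits: the kernel stops the container exceeding 512 MB,
# and it can never use more than one CPU core.
docker run -d --memory="512m" --cpus="1.0" my-app
</code></pre>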
<h2>3. Reliability and Health</h2>
<p>Production systems need to be self-healing. If a service hangs, Docker needs to know how to handle it.</p>
<ul>
<li><strong>Implement Health Checks:</strong> Use health checks to let Docker monitor the actual status of your application, not just whether the process is running (see the sketch after this list).</li>
<li><strong>Production Environment Variables:</strong> Ensure your <code>NODE_ENV</code> or equivalent variables are explicitly set to <code>production</code>. This often triggers optimizations in frameworks that improve performance and disable verbose debugging logs.</li>
<li><strong>Data Persistence:</strong> Use named volumes for your production data to ensure portability and easier backup management.</li>
</ul>
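<p>As a sketch of the first point, a Dockerfile health check for an app that exposes an HTTP endpoint on port 3000 might look like this; the <code>/health</code> path and the timings are illustrative assumptions.</p>
<pre><code class="language-dockerfile"># Poll the app every 30 seconds; after three consecutive
# failures Docker marks the container as unhealthy.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
</code></pre>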
<h2>Conclusion: You Are Ready</h2>
<p>Docker has revolutionized how we develop, ship, and run applications. By understanding these core pillars—Architecture, Networking, Volumes, Multi-stage builds, and Orchestration—you are no longer just "running containers". You are building scalable, professional infrastructure.</p>
<p>Whether you are hosting a personal project in your home lab or managing a massive cluster for a client, these principles remain the same. </p>
<p><strong>Happy Dockerizing!</strong></p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>Docker</category>
      <category>Security</category>
      <category>DevOps</category>
      <category>Production</category>
      <category>SysAdmin</category>
      <category>Best Practices</category>
    </item>
    <item>
      <title>The Conductor: Orchestrating Multi-Container Apps with Docker Compose</title>
      <link>https://asifthewebguy.me/posts/the-conductor-orchestrating-multi-container-apps-with-docker-compose.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/the-conductor-orchestrating-multi-container-apps-with-docker-compose.html</guid>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Stop managing containers one by one. Learn how to use Docker Compose to define and run entire application stacks with a single YAML file.]]></description>
      <content:encoded><![CDATA[<p>Until now, we have been looking at containers as individual units. We fixed their plumbing, gave them memory, and put them on a diet. But in the real world, an application is rarely just one container. </p>
<p>A modern web app usually looks like this:</p>
<ul>
<li>A <strong>Frontend</strong> (React or Vue)</li>
<li>A <strong>Backend API</strong> (Node.js, Laravel, or Python)</li>
<li>A <strong>Database</strong> (PostgreSQL or MySQL)</li>
<li>A <strong>Cache</strong> (Redis)</li>
</ul>
<p>Starting these one by one with <code>docker run</code> is tedious and error-prone. This is where <strong>Docker Compose</strong> steps in as your conductor.</p>
<h2>What is Docker Compose?</h2>
<p>Docker Compose is a tool that allows you to define and run multi-container applications. Instead of typing long commands in your terminal, you define your entire infrastructure in a single file called <code>docker-compose.yml</code>.</p>
<p>With one command, you can start every service your app needs, pre-configured to talk to each other.</p>
<h2>Breaking Down the YAML File</h2>
<p>The <code>docker-compose.yml</code> file is organized into three main sections:</p>
<ol>
<li><strong>Services:</strong> This is where you define your containers (the frontend, the backend, etc.).</li>
<li><strong>Networks:</strong> This automatically sets up the "plumbing" we discussed in post one so your services can communicate.</li>
<li><strong>Volumes:</strong> This handles the "memory" from post two so your database stays persistent.</li>
</ol>
<h3>A Full-Stack Example</h3>
<p>Here is a simplified look at how a typical stack is defined:</p>
<pre><code class="language-yaml">version: "3.8"
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    networks:
      - app-network

  backend:
    build: ./backend
    environment:
      - DB_HOST=database
    networks:
      - app-network

  database:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
</code></pre>
<h2>Essential Compose Commands</h2>
<p>Once your file is ready, these are the commands you will use every day:</p>
<ul>
<li><strong>Start everything:</strong> <code>docker-compose up -d</code> (The <code>-d</code> runs it in the background).</li>
<li><strong>Stop and remove everything:</strong> <code>docker-compose down</code>.</li>
<li><strong>View running services:</strong> <code>docker-compose ps</code>.</li>
<li><strong>View live logs:</strong> <code>docker-compose logs -f</code>.</li>
<li><strong>Run a command inside a service:</strong> <code>docker-compose exec backend npm run migrate</code>.</li>
</ul>
<h2>Why This is a Game Changer</h2>
<p>Using Compose means your entire environment is documented in your code. If a new developer joins your team, they don't need to ask you for setup instructions, and if you want to move your app to a new server in your home lab, neither do you. Run <code>docker-compose up</code> and everything works.</p>
<p>In tools like Portainer, these files are often referred to as "Stacks." It is the most efficient way to manage complex applications without losing track of your configuration.</p>
<h2>Wrapping Up</h2>
<p>Docker Compose takes the manual labor out of container management. It ensures that your frontend, backend, and database always start in the right order with the right settings.</p>
<p>In our final post of this series, we will look at <strong>The Guard</strong>. We will cover the essential checklist for moving these containers out of development and into a secure production environment.</p>
<p><strong>Happy Dockerizing!</strong></p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>Docker</category>
      <category>Docker Compose</category>
      <category>DevOps</category>
      <category>Web Development</category>
      <category>HomeLab</category>
      <category>Backend</category>
      <category>Docker-Series</category>
    </item>
    <item>
      <title>The Diet: Shrinking Your Docker Images with Multi-Stage Builds</title>
      <link>https://asifthewebguy.me/posts/the-diet-shrinking-your-docker-images-with-multi-stage-builds.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/the-diet-shrinking-your-docker-images-with-multi-stage-builds.html</guid>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Large Docker images are slow to download and less secure. Learn how to use multi-stage builds to strip away the bloat and create lean, production-ready images.]]></description>
      <content:encoded><![CDATA[<p>In our previous posts, we fixed the plumbing and secured the memory. Now, it is time to look in the mirror. Is your Docker image too big? </p>
<p>When you first start building images, it is common to end up with files that are 900MB or larger. These heavy images take longer to upload to your registry, longer to pull onto your Proxmox server, and they often contain security vulnerabilities you do not need.</p>
<p>Today, we are putting our images on a diet using <strong>Multi-Stage Builds</strong>.</p>
<h2>The Problem: The Single-Stage Bloat</h2>
<p>Imagine you are building a React or Node.js application. To build the app, you need tools like <code>npm</code>, compilers, and source files. However, once the app is "built" into a production folder, you do not need <code>npm</code> or the source code anymore. You only need the final files and a tiny web server.</p>
<p>In a traditional single-stage Dockerfile, all those build tools stay inside the final image. This is like keeping the construction crane inside the house after you have finished building it.</p>
<h2>The Solution: Multi-Stage Builds</h2>
<p>Multi-stage builds allow you to use multiple <code>FROM</code> statements in one Dockerfile. You use one "stage" to build your app and a second "stage" to actually run it. </p>
<p>Here is how the logic works:</p>
<ol>
<li><strong>Stage 1 (The Builder):</strong> You use a full image with all the tools needed to compile your code.</li>
<li><strong>Stage 2 (The Production Image):</strong> You start with a tiny, slim image (like Alpine Linux). You copy <strong>only</strong> the finished files from the first stage and leave everything else behind.</li>
</ol>
<h3>A Practical Example (Node.js)</h3>
<pre><code class="language-dockerfile"># Stage 1: Build the app
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:18-alpine
WORKDIR /app
# We only copy the 'dist' folder from the builder stage
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/app.js"]
</code></pre>
<h2>Why This Matters</h2>
<ul>
<li><strong>Smaller Size:</strong> An image can drop from 900MB to 50MB just by switching to a multi-stage build with an Alpine base.</li>
<li><strong>Better Security:</strong> Since the final image does not have compilers or package managers, there is a much smaller attack surface for hackers.</li>
<li><strong>Faster Deployments:</strong> In my home lab, pulling a 50MB image is nearly instant compared to waiting for a massive 1GB file.</li>
</ul>
<h2>Best Practices for a Lean Image</h2>
<ul>
<li><strong>Use .dockerignore:</strong> Just like <code>.gitignore</code>, this tells Docker to ignore files like <code>node_modules</code> or local logs during the build (this point and the next are illustrated after the list).</li>
<li><strong>Combine RUN Commands:</strong> Every <code>RUN</code> command creates a layer in your image. Combining them using <code>&amp;&amp;</code> helps keep the layer count low.</li>
<li><strong>Pick Official Images:</strong> Always try to use official images from Docker Hub to ensure they are updated and secure.</li>
</ul>
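<p>To illustrate the first two points, here is a typical <code>.dockerignore</code> and a combined <code>RUN</code> instruction; the package name is a placeholder.</p>
<pre><code># .dockerignore
node_modules
npm-debug.log
.git
*.log
</code></pre>
<pre><code class="language-dockerfile"># One layer instead of three, and the apt cache is removed
# inside the same layer so it never ends up in the image.
RUN apt-get update &amp;&amp; \
    apt-get install -y --no-install-recommends curl &amp;&amp; \
    rm -rf /var/lib/apt/lists/*
</code></pre>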
<h2>Wrapping Up</h2>
<p>A lean image is a fast image. By using multi-stage builds, you ensure that your production environment only contains exactly what it needs to run.</p>
<p>In the next post, we are going to look at <strong>The Conductor</strong>. We will move beyond single containers and learn how to use <strong>Docker Compose</strong> to run entire stacks with a single command.</p>
<p><strong>Happy Dockerizing!</strong></p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>Docker</category>
      <category>DevOps</category>
      <category>Web Development</category>
      <category>Optimization</category>
      <category>Security</category>
      <category>Docker-Series</category>
    </item>
    <item>
      <title>The Memory: Why Your Data Should Never Live in a Container</title>
      <link>https://asifthewebguy.me/posts/the-memory-why-your-data-should-never-live-in-a-container.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/the-memory-why-your-data-should-never-live-in-a-container.html</guid>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Losing data is every developer's nightmare. In this post, we learn how to use Docker Volumes and Bind Mounts to ensure your database stays intact, even if your container is deleted.]]></description>
      <content:encoded><![CDATA[<p>In our last post, we fixed the plumbing. We made sure our containers could talk to each other. But there is a bigger problem: containers are <strong>ephemeral</strong>. This is a fancy way of saying they are temporary. If you delete a container, everything inside it, like your database records or uploaded images, disappears forever.</p>
<p>To solve this, we need to move the data out of the container and onto the host machine. We have three main ways to do this.</p>
<h2>The Three Storage Options</h2>
<ol>
<li><strong>Bind Mounts:</strong> You map a specific path on your host machine (like <code>/home/asif/project</code>) to a path inside the container.</li>
<li><strong>Volumes:</strong> These are managed entirely by Docker. You do not need to worry about where they live on the host; Docker handles the directory structure for you.</li>
<li><strong>tmpfs Mounts:</strong> These live only in the host's memory. They are never written to disk, making them perfect for sensitive data that should disappear when the container stops (see the example after this list).</li>
</ol>
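<p>Bind mount and volume examples follow below; for completeness, a tmpfs mount looks like this (the path and image name are arbitrary examples):</p>
<pre><code class="language-bash"># /app/cache lives only in RAM and vanishes when the container stops.
docker run -d --tmpfs /app/cache my-app
</code></pre>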
<h2>Bind Mounts vs. Volumes: Which One Should You Use?</h2>
<p>This is where most people get confused. Here is a simple breakdown:</p>
<table>
<thead>
<tr>
<th align="left">Feature</th>
<th align="left">Bind Mount</th>
<th align="left">Docker Volume</th>
</tr>
</thead>
<tbody><tr>
<td align="left"><strong>Location</strong></td>
<td align="left">You choose the host path.</td>
<td align="left">Docker-managed (<code>/var/lib/docker/volumes</code>).</td>
</tr>
<tr>
<td align="left"><strong>Syntax</strong></td>
<td align="left"><code>-v /host/path:/container/path</code></td>
<td align="left"><code>-v volume-name:/container/path</code></td>
</tr>
<tr>
<td align="left"><strong>Best Use Case</strong></td>
<td align="left">Development (Hot reloading).</td>
<td align="left">Production (Data persistence).</td>
</tr>
<tr>
<td align="left"><strong>Portability</strong></td>
<td align="left">Host-dependent.</td>
<td align="left">Portable across systems.</td>
</tr>
</tbody></table>
<h2>Practical Examples</h2>
<h3>For Development (Bind Mount)</h3>
<p>If you are working on a Node.js or Laravel project, you want the container to see your code changes immediately.<br><code>docker run -d -v $(pwd):/app my-app</code></p>
<h3>For Your Database (Named Volume)</h3>
<p>For something like PostgreSQL, you want Docker to manage the storage safely.<br><code>docker run -d -v db-data:/var/lib/postgresql/data postgres</code></p>
<h2>Pro-Tip for Home Lab Users</h2>
<p>Since I use <strong>Portainer</strong>, I prefer using <strong>Named Volumes</strong> for my stacks. It makes it much easier to back up the data and move it between different Proxmox virtual machines without worrying about hardcoded file paths on the host.</p>
<h2>Wrapping Up</h2>
<p>Managing storage correctly is the difference between a stable app and a total data loss disaster. Always remember: <strong>Keep your application in the container, but keep your data in a volume.</strong></p>
<p>In the next post, we are going to look at <strong>The Diet</strong>. I will show you how to use Multi-Stage builds to make your images smaller, faster, and more secure.</p>
<p><strong>Happy Dockerizing!</strong></p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>Docker</category>
      <category>Storage</category>
      <category>DevOps</category>
      <category>HomeLab</category>
      <category>Databases</category>
      <category>SysAdmin</category>
      <category>Docker-Series</category>
    </item>
    <item>
      <title>The Plumbing: How Docker Containers Talk to Each Other</title>
      <link>https://asifthewebguy.me/posts/the-plumbing-how-docker-containers-talk-to-each-other.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/the-plumbing-how-docker-containers-talk-to-each-other.html</guid>
      <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Ever wondered how a container gets an IP address or why you cannot reach your database? We are diving into the world of Docker networking, from virtual ethernet cables to custom bridge networks.]]></description>
      <content:encoded><![CDATA[<p>In the last post, we talked about why Docker is essential for keeping your work consistent. Today, we are opening up the floorboards to look at the plumbing. In the world of Docker, plumbing is <strong>Networking</strong>.</p>
<p>When you run a container, it feels like it is on its own island. However, to be useful, it needs to talk to the internet, the host machine, or other containers. Here is how that actually happens.</p>
<h2>The Default Bridge (docker0)</h2>
<p>By default, Docker creates a virtual bridge on your Linux host called <code>docker0</code>. Think of this as a virtual network switch. </p>
<p>When you start a container, Docker gives it a virtual ethernet pair (called <code>veth</code>). One end of this "cable" stays in the container as <code>eth0</code>, and the other end plugs into the <code>docker0</code> bridge on your host. This is how the container gets its own IP address, usually something like <code>172.17.0.2</code>.</p>
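<p>You can see this for yourself on any Docker host. The first command below prints the bridge's subnet and the IP of every attached container, and the second lists the host-side ends of the virtual cables.</p>
<pre><code class="language-bash"># Show the default bridge's subnet and connected containers.
docker network inspect bridge
# The host-side ends of the veth pairs show up as interfaces.
ip link | grep veth
</code></pre>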
<h2>Why the Default Bridge is Not Enough</h2>
<p>While the default bridge works, it has a major downside: <strong>It does not support automatic DNS resolution.</strong></p>
<p>If you have a "web" container and a "database" container on the default bridge, the web container cannot find the database by its name. You would have to use the specific IP address. Since container IPs change every time they restart, this is a nightmare to manage.</p>
<h2>The Solution: User-Defined Networks</h2>
<p>This is the "pro" way to do things in your home lab. You can create your own networks to get three main benefits:</p>
<ol>
<li><strong>Automatic DNS Resolution:</strong> Containers can talk to each other using their names (e.g., <code>mysql</code> or <code>api-server</code>) instead of shifting IP addresses.</li>
<li><strong>Better Isolation:</strong> You can keep your database on a private network and only expose your web server to the outside world.</li>
<li><strong>Dynamic Attachment:</strong> You can connect or disconnect containers from networks while they are still running.</li>
</ol>
<h2>Networking Cheat Sheet</h2>
<p>Here are the commands you will use most often to manage your plumbing:</p>
<ul>
<li><strong>Create a network:</strong> <code>docker network create my-network</code></li>
<li><strong>List all networks:</strong> <code>docker network ls</code></li>
<li><strong>Connect a running container:</strong> <code>docker network connect my-network my-container</code></li>
<li><strong>Disconnect a container:</strong> <code>docker network disconnect my-network my-container</code></li>
<li><strong>Remove a network:</strong> <code>docker network rm my-network</code></li>
</ul>
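<p>Putting the cheat sheet together, here is a minimal end-to-end demonstration of name-based resolution. The container and network names are arbitrary, and the password is a throwaway example.</p>
<pre><code class="language-bash">docker network create my-network
docker run -d --name database --network my-network \
  -e POSTGRES_PASSWORD=example postgres:15
# The second container reaches the first by name - no IP needed.
docker run --rm --network my-network alpine ping -c 1 database
</code></pre>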
<h2>Wrapping Up</h2>
<p>Understanding the plumbing makes debugging much easier. If your app cannot connect to its database, the first thing you should check is if they are on the same network.</p>
<p>In the next post, we are going to talk about <strong>The Memory</strong>. We will look at Volumes and Storage to make sure your data does not disappear when a container stops.</p>
<p><strong>Happy Dockerizing!</strong></p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>Docker</category>
      <category>Networking</category>
      <category>DevOps</category>
      <category>HomeLab</category>
      <category>SysAdmin</category>
      <category>Containerization</category>
      <category>Self-Hosting</category>
      <category>Software Engineering</category>
      <category>Docker-Series</category>
    </item>
    <item>
      <title>Why Docker? Moving From &quot;It Works on My Machine&quot; to &quot;It Works Everywhere&quot;</title>
      <link>https://asifthewebguy.me/posts/why-docker-moving-from-it-works-on-my-machine-to-it-works-everywhere.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/why-docker-moving-from-it-works-on-my-machine-to-it-works-everywhere.html</guid>
      <pubDate>Sun, 12 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Stop struggling with "it works on my machine" errors. This new series breaks down how Docker keeps your development and production environments perfectly in sync. We are covering everything from low level networking and persistent storage to multi-stage builds and production security. Whether you are a beginner or looking to optimize your home lab setup, this guide has you covered.]]></description>
      <content:encoded><![CDATA[<p>If you have been in web development for more than a week, you have probably run into the classic problem. A project works perfectly on your local laptop, but the moment you try to move it to a server or share it with a teammate, everything breaks. Maybe the Node.js version is different, or a specific database driver is missing. </p>
<p>This is where <strong>Docker</strong> changed everything. </p>
<p>For me, Docker is the backbone of my home lab. Whether I am managing containers on <strong>Proxmox</strong> or using <strong>Portainer</strong> to visualize my stacks, Docker is what keeps my development environment consistent and my production deployments stable. </p>
<p>But Docker is more than just a buzzword. It is a toolset that, when used correctly, makes your life as a developer significantly easier. Over the next few weeks, I am going to break down exactly how Docker works, from the basic plumbing to high level orchestration.</p>
<h3>What to Expect in This Series</h3>
<p>We are going to go deep into the mechanics of containerization. Here is the roadmap for the upcoming posts:</p>
<ol>
<li><strong>The Plumbing (Networking):</strong> We will look under the hood at how containers actually talk to each other and the host machine.</li>
<li><strong>The Memory (Volumes &amp; Storage):</strong> I will show you how to ensure your data stays safe even if your container is deleted.</li>
<li><strong>The Diet (Multi-Stage Builds):</strong> We will learn how to shrink your image sizes so your deployments are fast and secure.</li>
<li><strong>The Conductor (Docker Compose):</strong> This is where we stop running single containers and start building full stack environments with one command.</li>
<li><strong>The Guard (Production Best Practices):</strong> A final checklist to make sure your containers are hardened and ready for the real world.</li>
</ol>
<p>Docker has revolutionized how we develop, ship, and run applications. By the end of this series, you will be equipped to containerize any application and deploy it consistently across any environment.</p>
<p><strong>Happy Dockerizing!</strong></p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>Docker</category>
      <category>DevOps</category>
      <category>HomeLab</category>
      <category>Web Development</category>
      <category>Containerization</category>
      <category>Self-Hosting</category>
      <category>Software Engineering</category>
      <category>Docker-Series</category>
    </item>
    <item>
      <title>My Childhood: From Old Radios to DevOps</title>
      <link>https://asifthewebguy.me/posts/my-childhood-from-old-radios-to-devops.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/my-childhood-from-old-radios-to-devops.html</guid>
      <pubDate>Tue, 31 Mar 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Before I had a DevOps home lab or a Linux terminal, I had a screwdriver and a broken Black & White TV. Growing up in the alleys of Old Dhaka in the 90s, I did not see junk, I saw a mystery to solve. This is the story of how tracing circuit boards like city maps led me to a life of building servers and solving problems 'under the hood.']]></description>
      <content:encoded><![CDATA[<p>I grew up in Dhaka during the late 1980s and early 90s. Back then, life was a lot slower. Most kids played cricket in the streets or spent time roaming around the alleys. But I was always a bit different. I always wanted to know how things worked "under the hood."</p>
<p>Before I had a computer or a home lab, I played with old electronics. I looked for broken radios or old Black &amp; White CRT TVs. To most people, a broken TV was just trash. To me, it was a mystery to solve. I loved taking them apart to see what was inside.</p>
<p>I still remember the smell of dust and old metal when I opened a plastic case. The green circuit boards looked like maps of a tiny city. I would trace my finger along the lines on the board, wondering how a signal moved through the wires to show a picture or play sound. I did not know how to fix things yet, but I loved trying to understand the logic. Sometimes I succeeded; most of the time I just learned something new. In a sense, I never had a failure, because every attempt taught me something.</p>
<p>Today, I have a DevOps home lab at my house. It has servers, Docker, and Proxmox. My toys are now virtual machines and code. The feeling is exactly the same when I get a new service to work. I feel the same joy I felt as a young boy in Dhaka. My childhood taught me how to solve problems. Whether it is an old TV or a modern server, I still love to discover how things work.</p>
<p>Looking back, those broken radios and CRT TVs were just my first servers, and my home lab today is simply a bigger version of the mystery I've been solving since I was a boy in Old Dhaka. My journey started with a screwdriver and a dream, and it continues today with a keyboard and a container.</p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>childhood dreams</category>
      <category>broken TVs and Radios</category>
      <category>HomeLab</category>
    </item>
    <item>
      <title>Building a Static Portfolio and CMS With Zero Backend</title>
      <link>https://asifthewebguy.me/posts/building-a-static-portfolio-and-cms-with-zero-backend.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/building-a-static-portfolio-and-cms-with-zero-backend.html</guid>
      <pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[How I built a fully static GitHub Pages portfolio, markdown blog, and in-browser CMS with no build step, no framework, and no server using vanilla JS and the GitHub API.]]></description>
      <content:encoded><![CDATA[<h1>Building a Static Portfolio and CMS With Zero Backend</h1>
<p>I've deployed a lot of things. Kubernetes clusters, Docker Swarms, managed databases, serverless functions. For my own portfolio, I wanted to do the opposite: deploy nothing.</p>
<p>The result is this site: a portfolio, blog, and content management system that runs entirely in the browser, with GitHub as both the host and the database.</p>
<p>Here's how it works.</p>
<h2>The constraints I set for myself</h2>
<p>Before writing a single line of code, I fixed the rules:</p>
<ul>
<li><strong>No build step.</strong> No webpack, vite, or bundler of any kind.</li>
<li><strong>No framework.</strong> Vanilla JS only.</li>
<li><strong>No backend.</strong> No server, no API, no database.</li>
<li><strong>Single-file pages.</strong> Each HTML file owns its own <code>&lt;style&gt;</code> and <code>&lt;script&gt;</code>.</li>
<li><strong>Three CDN libraries maximum:</strong> <code>marked.js</code> for markdown, <code>highlight.js</code> for code, <code>DOMPurify</code> for sanitisation.</li>
</ul>
<p>The goal was a site I could understand entirely, deploy for free, and edit from any browser without pulling in dependencies that would rot in six months.</p>
<h2>The architecture</h2>
<p>The stack is four HTML files, two JSON files, and a folder of markdown:</p>
<pre><code>index.html            → portfolio homepage
blog.html             → post listing with search + tag filters
post.html             → single post reader
admin.html            → in-browser CMS

data/config.json      → all portfolio content (source of truth)
data/posts-index.json → blog post metadata

posts/*.md            → blog posts with YAML front matter
</code></pre>
<p>GitHub Pages serves everything as static files. The browser does all the work.</p>
<h2>GitHub as a database</h2>
<p>The most interesting part of this setup is the CMS. It authenticates with a GitHub Personal Access Token (stored in <code>localStorage</code>, never logged, never sent anywhere except <code>api.github.com</code>) and uses the GitHub Contents API to read, write, and delete files directly in the repository.</p>
<p>Reading a file:</p>
<pre><code class="language-javascript">async function readFile(path) {
  const res = await fetch(`${API}/repos/${OWNER}/${REPO}/contents/${path}`, {
    headers: apiHeaders()
  });
  if (!res.ok) throw new Error(`Read failed: ${res.status}`);
  const { content, sha } = await res.json();
  // Mirror the unicode-safe encoding used on write: plain atob()
  // alone would garble any non-ASCII characters in the file.
  const raw = atob(content.replace(/\n/g, ''));
  return { content: decodeURIComponent(escape(raw)), sha };
}
</code></pre>
<p>Writing a file:</p>
<pre><code class="language-javascript">async function writeFile(path, content, sha, message) {
  const body = {
    message,
    content: btoa(unescape(encodeURIComponent(content))),
    ...(sha &amp;&amp; { sha }),
  };
  const res = await fetch(`${API}/repos/${OWNER}/${REPO}/contents/${path}`, {
    method: 'PUT',
    headers: apiHeaders(),
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Write failed: ${res.status}`);
  return res.json();
}
</code></pre>
<p>Two things worth noting here:</p>
<p><strong>The SHA requirement.</strong> The GitHub API requires the current file SHA when updating an existing file. If you skip it, you get a 409 Conflict. The CMS caches SHAs in a <code>shaCache</code> object after every read and write, so subsequent saves don't fail.</p>
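<p>A minimal sketch of that pattern, reusing the <code>readFile</code> and <code>writeFile</code> helpers above (the exact shape of the real cache may differ):</p>
<pre><code class="language-javascript">const shaCache = {};

async function save(path, content, message) {
  // Use the cached SHA if we have one; otherwise read the file
  // to learn it. A brand-new file has no SHA at all.
  let sha = shaCache[path];
  if (sha === undefined) {
    try {
      sha = (await readFile(path)).sha;
    } catch {
      sha = undefined; // file does not exist yet
    }
  }
  const result = await writeFile(path, content, sha, message);
  shaCache[path] = result.content.sha; // GitHub returns the new SHA
  return result;
}
</code></pre>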
<p><strong>Unicode-safe base64.</strong> Plain <code>btoa()</code> breaks on non-ASCII characters. The pattern <code>btoa(unescape(encodeURIComponent(content)))</code> handles any unicode correctly.</p>
<h2>The CMS</h2>
<p>The admin panel has three tabs:</p>
<p><strong>Portfolio editor:</strong> collapsible sections for bio, skills (add/remove categories and items), projects (expandable blocks with tech tags, bullets, GitHub/live links), and DevOps entries. Saves to <code>data/config.json</code> via the GitHub API. The homepage reads this file and renders everything dynamically, no content is hardcoded in the HTML.</p>
<p><strong>Blog posts:</strong> a table of all posts pulled from <code>data/posts-index.json</code>. Each row has edit and delete actions. Delete shows a confirmation overlay, removes the <code>.md</code> file, and updates the index in the same operation.</p>
<p><strong>Post editor:</strong> a split-pane markdown editor with:</p>
<ul>
<li>A toolbar for common formatting (Bold, Italic, H2, H3, Link, Code, Code Block, List, Quote, HR)</li>
<li>Live preview with syntax highlighting, debounced 300ms</li>
<li>Front matter fields (title, date, excerpt, tags) with auto-slug generation</li>
<li>Draft autosave to <code>localStorage</code> every 30 seconds</li>
<li>An unsaved-changes warning on <code>beforeunload</code></li>
<li>Tab key inserts two spaces instead of moving focus</li>
</ul>
<p>Saving publishes the <code>.md</code> file and updates the index in sequence. If the index write fails after the post write succeeds, the index is stale but the post file is safe; the next save will retry with the correct SHA.</p>
<h2>The post reader</h2>
<p><code>post.html</code> fetches the markdown file directly as a static asset, parses the YAML front matter, and renders with marked.js + DOMPurify. Code blocks get highlight.js applied per-element and per-block copy buttons injected on hover.</p>
<p>The reading progress bar is a CSS <code>width</code> animation driven by the scroll position:</p>
<pre><code class="language-javascript">window.addEventListener('scroll', () =&gt; {
  const scrolled = window.scrollY;
  const total = document.body.scrollHeight - window.innerHeight;
  progressBar.style.width = `${Math.min(100, (scrolled / total) * 100)}%`;
});
</code></pre>
<p>The table of contents is built by scanning the rendered HTML for <code>h2</code> and <code>h3</code> elements, assigning deterministic IDs, and tracking the active heading with an IntersectionObserver.</p>
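<p>A sketch of that pattern, assuming the rendered post lives in a <code>.post-content</code> container and the TOC links use matching hash anchors (both selectors are assumptions):</p>
<pre><code class="language-javascript">const headings = document.querySelectorAll('.post-content h2, .post-content h3');

const observer = new IntersectionObserver((entries) =&gt; {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      // Highlight the TOC link pointing at the heading now in view.
      document.querySelectorAll('.toc a').forEach((a) =&gt;
        a.classList.toggle('active', a.hash === `#${entry.target.id}`));
    }
  }
}, { rootMargin: '0px 0px -80% 0px' });

headings.forEach((h) =&gt; observer.observe(h));
</code></pre>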
<h2>One gotcha: Jekyll</h2>
<p>GitHub Pages runs Jekyll by default, which intercepts <code>.md</code> files and tries to render them as HTML instead of serving them raw. The post reader fetches <code>posts/{slug}.md</code> directly; if Jekyll is active, that request 404s.</p>
<p>The fix is a single empty file at the repository root:</p>
<pre><code>.nojekyll
</code></pre>
<p>That's it. Jekyll disabled. <code>.md</code> files served as-is.</p>
<h2>The design system</h2>
<p>Every colour, every spacing decision, every font choice is driven by CSS custom properties. No colour is hardcoded anywhere in the HTML files. The palette:</p>
<pre><code class="language-css">--color-bg:        #0B1220
--color-surface:   #111827
--color-border:    #1E3A5F
--color-accent:    #2563EB
--color-gold:      #F59E0B
--color-text:      #F1F5F9
</code></pre>
<p>Fonts are DM Serif Display for headings, DM Sans for body text, and JetBrains Mono for code and slugs, all loaded from Google Fonts with <code>display=swap</code>.</p>
<h2>What I'd do differently</h2>
<p><strong>Caching.</strong> The site uses <code>sessionStorage</code> to cache <code>config.json</code> and <code>posts-index.json</code> on first load. This avoids redundant fetches but means content changes don't propagate until the cache is cleared. A smarter approach would be to version-stamp the cached data and invalidate it after a known write.</p>
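<p>A version-stamped variant might look like the sketch below: write a version number into the index at publish time and discard any cached payload that no longer matches. The key names here are hypothetical.</p>
<pre><code class="language-javascript">// Store the payload together with the version it was fetched under.
function cacheSet(key, data, version) {
  sessionStorage.setItem(key, JSON.stringify({ version, data }));
}

// Return the cached payload only if the version still matches.
function cacheGet(key, expectedVersion) {
  const raw = sessionStorage.getItem(key);
  if (!raw) return null;
  const { version, data } = JSON.parse(raw);
  return version === expectedVersion ? data : null;
}
</code></pre>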
<p><strong>Conflict handling.</strong> If two browser tabs edit the same file simultaneously, the second write will fail with a 409. The CMS surfaces the error but doesn't auto-resolve it. Good enough for a single-author site.</p>
<p><strong>No markdown in the portfolio editor.</strong> The bio field is plain text. Skills, project descriptions, and bullets are all plain strings. This was a deliberate simplification; the complexity of a rich-text or markdown editor in that tab wasn't worth it for fields that rarely change.</p>
<h2>The result</h2>
<p>A portfolio and blog that loads in under a second, costs nothing to run, requires no deployment pipeline, and can be edited from any browser with a PAT. Every post is a markdown file in a git repository. Every change is a commit.</p>
<p>The whole thing is about 160KB of source code: four HTML files, two JSON files, a handful of markdown posts. No node_modules. No lockfile. No build artefacts.</p>
<p>I think that's the right size for a personal site.</p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>javascript</category>
      <category>github</category>
      <category>cms</category>
      <category>portfolio</category>
      <category>meta</category>
    </item>
    <item>
      <title>Deploying Node.js Apps with Docker and Nginx on a VPS</title>
      <link>https://asifthewebguy.me/posts/deploying-nodejs-with-docker-nginx.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/deploying-nodejs-with-docker-nginx.html</guid>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[A practical, step-by-step guide to containerising a Node.js application, setting up Nginx as a reverse proxy with SSL, and deploying it to a VPS - the way I do it on every project.]]></description>
      <content:encoded><![CDATA[<h1>Deploying Node.js Apps with Docker and Nginx on a VPS</h1>
<p>This is the exact workflow I use on every project. No fancy orchestration, just Docker Compose, Nginx, and Let's Encrypt running on a plain Ubuntu VPS.</p>
<h2>Prerequisites</h2>
<ul>
<li>A VPS running Ubuntu 22.04 (I use Linode or DigitalOcean)</li>
<li>A domain pointed at your server's IP</li>
<li>Docker and Docker Compose installed</li>
</ul>
<h2>1. Containerise your app</h2>
<pre><code class="language-dockerfile">FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# Install production dependencies only (--only=production is deprecated)
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>Build and test locally:</p>
<pre><code class="language-bash">docker build -t myapp .
docker run -p 3000:3000 myapp
</code></pre>
<h2>2. Docker Compose setup</h2>
<pre><code class="language-yaml">version: '3.8'
services:
  app:
    image: myapp:latest
    restart: unless-stopped
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
    networks:
      - web

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./certs:/etc/letsencrypt
    depends_on:
      - app
    networks:
      - web

networks:
  web:
</code></pre>
<h2>3. Nginx configuration</h2>
<pre><code class="language-nginx">server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://app:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
</code></pre>
<h2>4. SSL with Let's Encrypt</h2>
<pre><code class="language-bash">apt install certbot
certbot certonly --standalone -d yourdomain.com
</code></pre>
<p>Set up auto-renewal:</p>
<pre><code class="language-bash">crontab -e
# The standalone authenticator needs port 80, so stop nginx around each renewal.
# Add: 0 3 * * * certbot renew --quiet --pre-hook "docker compose stop nginx" --post-hook "docker compose start nginx"
</code></pre>
<h2>5. Zero-downtime deploy script</h2>
<pre><code class="language-bash">#!/bin/bash
docker pull myapp:latest
docker compose up -d --no-deps app
echo "Deployed at $(date)"
</code></pre>
<p>That's it. Simple, reliable, and you own the whole stack.</p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>node.js</category>
      <category>docker</category>
      <category>nginx</category>
      <category>devops</category>
      <category>vps</category>
    </item>
    <item>
      <title>Hello World: Why I Built This Blog</title>
      <link>https://asifthewebguy.me/posts/hello-world.html</link>
      <guid isPermaLink="true">https://asifthewebguy.me/posts/hello-world.html</guid>
      <pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[A short note on why I decided to build my own blog from scratch instead of using Medium or Dev.to, and what I'm planning to write about.]]></description>
      <content:encoded><![CDATA[<p>Everyone starts somewhere. This is mine.</p>
<h2>Why not Medium or Dev.to?</h2>
<p>I've written on both. They're fine platforms, but I kept running into the same friction: I don't own the content, the URL changes when I move, and the reading experience is buried under popups asking me to sign up.</p>
<p>I wanted something different:</p>
<ul>
<li><strong>Full ownership.</strong> My words, my domain, my git history.</li>
<li><strong>Zero bloat.</strong> No tracking, no paywall prompts, no "upgrade to Medium Partner" banners.</li>
<li><strong>Built by me.</strong> I'm a developer. Building my own tools is how I learn what I actually believe.</li>
</ul>
<h2>What this blog is</h2>
<p>This is a technical blog, mostly. I'll write about what I'm building and what I'm learning: Node.js, Docker, PostgreSQL, Next.js, Nginx, VPS setups, SaaS architecture, and the occasional tool I've built that other people might find useful.</p>
<p>I won't write about things I haven't actually done. Every post here will be grounded in something I've shipped, debugged, or deployed in production.</p>
<h2>What this blog is not</h2>
<ul>
<li>It's not a growth hack.</li>
<li>It's not SEO content written to rank for keywords.</li>
<li>It won't have a newsletter popup. Ever.</li>
</ul>
<h2>How it's built</h2>
<p>The irony is that this blog is itself a project I'll probably write about. It's a fully static GitHub Pages site with no build step, no framework, no backend. Vanilla JS reads markdown files from the repo and renders them in the browser. A lightweight in-browser CMS handles editing via the GitHub API.</p>
<p>Simple. Owned. Fast.</p>
<hr>
<p>If something I write helps you ship something, that's enough. Welcome.</p>
]]></content:encoded>
      <dc:creator>Asif Chowdhury</dc:creator>
      <category>personal</category>
      <category>meta</category>
    </item>
  </channel>
</rss>
