<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title><![CDATA[OhMyScript]]></title>
    <link>https://ohmyscript.com</link>
    <description><![CDATA[Tinkerer, FOSS enthusiast, systems engineer, polymathic indie computer scientist, and data-science aficionado. Explore my projects, read my writing, and connect with me to collaborate on interesting challenges.]]></description>
    <atom:link href="https://ohmyscript.com/feed.xml" rel="self" type="application/rss+xml" />
    <lastBuildDate>Fri, 10 Apr 2026 11:33:16 GMT</lastBuildDate>
      <item>
        <title><![CDATA[In defense of the wired earphones]]></title>
        <link>https://ohmyscript.com/musings/why-wired-over-wireless/</link>
        <guid>https://ohmyscript.com/musings/why-wired-over-wireless/</guid>
        <description><![CDATA[Rant on why I prefer wired earphones over wireless]]></description>
        <content:encoded><![CDATA[# In defense of the wired earphones

Hello everyone!!

This is about my return to wired headphones and earphones. For years now, wireless earphones and headphones have been the norm. I remember, exactly ten years back, untangling the wires of my LEGENDARY wired Samsung 3.5 mm jack earphones.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h6uoede4dfgo3uqj5os.png"
  alt="Samsung 3.5 mm wired earphones with inline microphone"
  caption="LEGENDARRRRY 3.5 mm Jack MP3 Earphones With Mic For Samsung"
/>

> No earphone to date has come close to it in terms of comfort and sound quality.

Starting in early 2016, wireless earphones and headsets became a style statement. Beyond fashion, the appeal was not having to deal with wires every time. No more tangled cables, which were a quantum problem of their own, haha!!

Like many others, I bought into that future early and stayed there for a long time. Recently, though, I moved back to wired headphones, and to be honest, I don't miss wireless as much as I thought I would.

Back when I used wired earphones, I hardly ever switched between them; at most three pairs in seven years, starting with Nokia's HS-47 Stereo Handsfree and some retro over-ear phones, eventually the LEGENDARY Samsung pair, and yes, some Mi wired earphones in between.

With wireless earphones, on the contrary, the experience has been very different. I have moved through multiple brands: JBL > Sony > boAt > Sennheiser > Skullcandy > Soundcore > Nothing.

Comfort has been a big factor in this constant switching. Some changes were in search of better comfort; others were because a pair caused really bad headaches.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0csaonroffh145xtr8x.png"
  alt="Chronograph watch with wired earphones"
  caption="My chrono with wired earphones"
/>

There's something profoundly satisfying about plugging in a cable and having it just work. No pairing, no connection drops, no battery anxiety. You plug it in, and voila, the music plays. Every single time.

Wireless earphones come with the burden of charging, and for someone like me, who listens to music most of the time, that can be really hard to manage. Battery life has improved these days, stretching up to 48 hours, which makes it somewhat manageable. But I still find it extremely annoying to juggle a smartwatch, earphones, and phone just to make sure each is charged when needed.

## The Battery Problem

Wired earphones don't need charging. They don't die on you. They don't have a degrading battery that turns your $100 purchase into e-waste after two years.

This isn't just about convenience, it's about sustainability. Every wireless earbud is a ticking time bomb of planned obsolescence, with batteries that can't be replaced and components too small to repair.

## Simplicity as a Feature

There's elegance in simplicity. Wired earphones have fewer points of failure. No firmware to update, no Bluetooth stack to troubleshoot, no compatibility issues across devices.

They're the Unix philosophy applied to audio hardware: **do one thing, and do it well**.

## Comfort fades over time

I have found wireless earphones uncomfortable to wear for longer stretches. For someone like me who works mostly with music on, often 2-3 hours at a stretch, I kept removing the headset and putting it back on because of the discomfort. Initially it feels good not to have a wire to deal with, but that novelty fades, and numerous other problems end up causing more discomfort.

> I want freedom from the battery anxiety

## The Physical Connection

There's something honest about a cable. You know where your earphones are because they're physically connected to your device. No losing individual earbuds. No case that needs tracking. No "find my earbuds" features required.

The cable is both a tether and a feature.

## Against the Tide

Choosing wired in 2026 feels like a statement. Not because I'm against progress, but because I'm for intentionality. Wireless earphones solved a problem I didn't have and introduced several I didn't want.

> "Technology should adapt to humans, not the other way around."

## The Middle Ground

I'm not a purist. I own wireless headphones for specific use cases: lawn work and other situations where freedom of movement matters. But for everything else? Give me the reliability, quality, and simplicity of a cable.

---

Sometimes, the old way isn't the wrong way. Sometimes, it's just the right way we forgot to appreciate. This isn't nostalgia for the sake of it. It's a reflection on usability, comfort, and the quiet reliability of something we left behind too quickly.

FYI, my current setup:

<table>
  <tr>
    <td>1</td>
    <td>EarPods (USB-C)</td>
    <td><a href="https://www.apple.com/in/shop/product/myqy3zm/a/earpods-usb-c">View</a></td>
  </tr>
  <tr>
    <td>2</td>
    <td>Nothing Ear</td>
    <td><a href="https://www.amazon.in/Nothing-Ear-Bluetooth-Headset-Wireless/dp/B0D2MW6L54/">View</a></td>
  </tr>
  <tr>
    <td>3</td>
    <td>soundcore H30i</td>
    <td><a href="https://www.amazon.in/soundcore-Headphones-Lightweight-Comfortable-Connectivity/dp/B0CD1FHLMH">View</a></td>
  </tr>
  <tr>
    <td>4</td>
    <td>Skullcandy JIB</td>
    <td><a href="https://www.amazon.in/Skullcandy-Jib-Ear-Mic-Black/dp/B07DGPDJ62/">View</a></td>
  </tr>
</table>

If you read this far, thank you for your time. How we listen is a small choice, but it is one of those everyday habits that quietly decides how much friction we put up with.

Whether you are team wired, team wireless, or happily on both, I hope this gave you something to think about. If you disagree or have a setup that works better for you, I would love to hear about it.

That's all for now. Thank you for reading.

Signing off until next time.]]></content:encoded>
        <pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Welcome to my newsletter]]></title>
        <link>https://ohmyscript.com/newsletter/welcome-to-my-newsletter/</link>
        <guid>https://ohmyscript.com/newsletter/welcome-to-my-newsletter/</guid>
        <description><![CDATA[What this newsletter is, who it is for, and how it fits alongside the rest of my writing.]]></description>
        <content:encoded><![CDATA[# Welcome to my newsletter

Hi there,

This space lists **newsletter issues** in one place. Some are **published here** as pages; others are pulled in automatically from my **[OhMyScript Publications](https://ohmyscript.beehiiv.com/)** feed on Beehiiv ([RSS](https://rss.beehiiv.com/feeds/WWDTjFj35Q.xml)). Issues that live on Beehiiv open in a new tab when you click them from Writing → Newsletter.

## What it is about

I write for people who build software and care about how it is shaped: engineering practice, tools, open source, and the occasional longer reflection that does not quite fit a short blog post. When something is better as a **letter**, more personal, more contextual, or tied to a moment, I will put it here.

## How it relates to the rest of my writing

- **Blogs** stay focused on tutorials, technical notes, and standalone articles.
- **Musings** stay loose and philosophical.
- **Newsletter** issues are where I can go a bit deeper on themes I am thinking about over time, share what I am reading or building, and speak more directly to you as a reader.

If you already follow my other writing, think of the newsletter as another lane on the same road: not a duplicate, but a different format.

## Subscribing

If you want these in your inbox, use the subscribe control on the site (bell icon) so you do not have to watch the feed by hand. Whether you read here or by email, I am glad you are here.

Thanks for reading, and welcome.]]></content:encoded>
        <pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Lessons from PostgreSQL Incident Story at Nixopus]]></title>
        <link>https://ohmyscript.com/blogs/lessons-from-postgresql-incident-story/</link>
        <guid>https://ohmyscript.com/blogs/lessons-from-postgresql-incident-story/</guid>
        <description><![CDATA[Takeaways from discovering a bug that turned out to be a security breach]]></description>
        <content:encoded><![CDATA[# Lessons from PostgreSQL Incident Story at Nixopus

This article is about a bug, or at least something that initially surfaced as a bug, only to later reveal itself as a security breach in disguise.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14z84luptmlz4i41gf7w.png"
  alt="That's a bug I haven't heard of"
  caption="That's a bug I haven't heard of"
/>

I initially spent time trying to find the root cause, reproducing the bug, and chasing symptoms through container logs, whereas the real problem was hiding in plain sight: a security vulnerability masquerading as a stability issue.

I am pretty sure most of you have spent hours debugging something, only to find you were looking at the wrong problem entirely. If so, this will definitely resonate.

To follow along, some basic context around <i>docker, docker-compose,</i> and <i>docker volumes</i> will help. Otherwise it would be like a goldfish reading a user manual for the ocean.

> Yes, the goldfish-reading-an-ocean-manual line is an absurdly incongruous analogy, I know.

## How did it all start?

We began noticing the issue around September/October 2025. This coincided with the period in which we had just finished integrating SuperTokens for Identity and Access Management. To give a glimpse of the Nixopus architecture: SuperTokens was self-hosted, connected to our existing PostgreSQL database service, and everything was wired together using docker-compose.

> Fun fact: the problem did not start in October. Now that we look back through our git history, it had been there for around 5-6 months.

<Note>
  Nixopus is (as of 8 Jan 2026) still in alpha. This incident happened in our test/development environment.
</Note>

It must have been happening way back then too. Since the database would restart, our backend would reconnect, and things appeared to recover on their own, it never really caught our attention.

Once the IAM responsibilities were moved to SuperTokens, the cracks started becoming visible. Every 1-3 days, logins would fail, and on inspection we found that SuperTokens was going down.

We initially assumed it was a SuperTokens issue, only to find that all the other containers had been running for X days, whereas the database container had been up for far less, usually just a few hours by the time we debugged.

This directed our focus to the database as the source of the failure.

The symptoms by then were clear:

- The PostgreSQL database container restarted unexpectedly
- SuperTokens went down as a downstream effect
- As a side effect of the above two, the service and IAM were unstable, making the application unusable

So far, we only knew the symptoms; the root cause was still a puzzle, unclear from every direction.

### Hypothesis 1

The earliest hypothesis was simple: maybe it was a volume permission issue. We were mounting the database directory from the host file system, and perhaps PostgreSQL couldn't write to it on restart for some reason. As mentioned, this was just a hunch, so to verify it, we moved the database container away from the host-mounted volume.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eya3xol91xnok97ohsua.png"
  alt="Occam's Razor: The simplest explanation is usually the right one"
  caption="Sometimes the simplest explanation is the right one, but not this time"
/>

The fix was to use a named volume instead, leaving Docker to take care of storage and handle persistence.
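A sketch of that change (service and volume names here are illustrative, not the exact Nixopus compose file):

```yaml
services:
  db:
    image: postgres:14 # tag illustrative
    volumes:
      # before: a bind mount into a host directory, subject to
      # host-side ownership and permission quirks on restart
      # - ./data/db:/var/lib/postgresql/data
      # after: a named volume, created and managed by Docker
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```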

- **Date**: October 15, 2025
- **Commit**: `f8fd7964d`
- **PR**: [#507](https://github.com/raghavyuva/nixopus/pull/507)

Within a day or so, however, we observed that the database container restarts still persisted.

### Hypothesis 2

[Raghav (creator of Nixopus)](https://github.com/raghavyuva) dug through GitHub issues where similar problems had been raised or observed. We found a couple of reports with similar symptoms and failures. This time, the evidence pointed to JIT compilation being memory intensive and causing an Out of Memory (OOM) condition.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/41c7ytvng1xwpjhtu8s1.png"
  alt="Just In Time Compiler"
  caption="JIT.. Just in Time Compiler"
/>

Though this seemed unlikely, we made a quick fix to explicitly disable JIT compilation, to rule out any memory spike or OOM kill causing the container to fail.

- **Date**: October 25, 2025
- **Commit**: `b2c35bd29`
- **PR**: [#539](https://github.com/raghavyuva/nixopus/pull/539)

Our suspicion, based on what we understood from the existing similar issues, was that PostgreSQL's Just In Time compilation is memory intensive and might cause the database to run out of memory and crash.
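On the official postgres image, JIT can be disabled by passing the setting on the server command line; a minimal sketch (not the exact Nixopus change):

```yaml
services:
  db:
    image: postgres:14 # tag illustrative
    # override the default command so the server starts with
    # jit=off, avoiding JIT's extra per-query memory usage
    command: ["postgres", "-c", "jit=off"]
```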

Sounds great and hopeful, doesn't it? Sadly, this fix did not resolve the issue either.

### Hypothesis 3

By December, we were really frustrated. Every time we merged a fix PR we thought it would resolve the problem, but the symptoms persisted and there was no clear progress.

At this point we had one wild hunch; though it seemed unlikely, we decided to dig a little deeper and verify it.

So far, we had found no specific failure messages in the container logs. None. That got us thinking: this could be related to security or authentication.

To verify this, we planned to support a feature where the PostgreSQL database could be an external service instead of being self-hosted via the project's docker-compose.

This would help us figure out whether the symptom was caused by application load or, as we guessed, by something security related. Our base setup shipped default credentials in the codebase, so it was quite possible that the default username/password was letting someone gain remote access.

## Culprit we caught

As part of the exercise, we added support for providing external database connection instead of self hosted database container.

This helped us isolate many of our assumptions and confirm that the problem was related to security and authentication: over 3-4 days with the external database, we saw no issues and everything ran smoothly.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kyhyrb3s47alchtu4n6u.png"
  alt="Scooby Doooooo"
  caption="...no one would have ever suspected me, if not for you meddling kids"
/>

When we then reviewed everything in detail, we found a major blunder we had missed at the very initial stage.

It was the PostgreSQL configuration that had caused all the trouble.

```yaml
# the culprit
environment:
  - POSTGRES_HOST_AUTH_METHOD=trust # Anyone can connect without password
ports:
  - "${DB_PORT:-5432}:5432" # Exposed to the entire internet
```

The `trust` authentication method means PostgreSQL trusts anyone who connects. No password required. No authentication checks. Just... trust.

> And we had exposed the database port to the entire internet (`0.0.0.0:5432`), not just localhost.
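For context, setting `POSTGRES_HOST_AUTH_METHOD=trust` makes the official image write a catch-all rule into `pg_hba.conf`; the effective rule is roughly this (a sketch, not a dump of our actual file):

```text
# TYPE  DATABASE  USER  ADDRESS  METHOD
host    all       all   all      trust    # any host, any user, no password asked
```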

This configuration is a critical security vulnerability, and it has been exploited in the real world in attacks like the [Kinsing malware targeting Kubernetes environments](https://securityaffairs.com/140581/hacking/kinsing-malware-kubernetes-environments.html), where attackers actively scan for misconfigured PostgreSQL instances with trust authentication.

GitHub issues like [1](https://github.com/docker-library/postgres/issues/770#issuecomment-704460980) and [2](https://github.com/docker-library/postgres/issues/1054#issuecomment-1447091521) document similar reports, including real cases where exactly such misconfigured PostgreSQL instances were compromised for cryptocurrency mining.

## Post Mortem Analysis

You might be wondering, how does this actually happen? How does unauthorized access cause database restarts?

Let us understand it better with an analogy. Imagine your house is unlocked and strangers keep walking in. Too many of them come in, opening hundreds or thousands of connections, exhausting your `max_connections` limit, blocking legitimate connections, and making the database slow and unresponsive.

The database becomes overwhelmed, its health checks fail, and eventually the service restarts.
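This is also why the container gets flagged as unhealthy: a typical compose health check needs a real connection slot of its own. A sketch of such a check (our actual check may differ):

```yaml
services:
  db:
    healthcheck:
      # psql must acquire a real connection; once strangers have
      # exhausted max_connections it fails with "too many clients
      # already", and repeated failures mark the container unhealthy
      test: ["CMD-SHELL", "psql -U ${POSTGRES_USER:-postgres} -c 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 5
```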

## Patchwork

By now we had a clear understanding of the issue, so the solution was much clearer too.

- **Date**: December 15, 2025
- **Commit**: `cb765ed14`
- **PR**: [#723](https://github.com/raghavyuva/nixopus/pull/723)

What did we have to do? Lock the doors, allowing authorized access only.

Move away from `POSTGRES_HOST_AUTH_METHOD` and use `POSTGRES_INITDB_ARGS` to set the authentication method during database initialization. This was a crucial learning.

That meant moving from `trust` authentication to SCRAM-SHA-256 based authentication, a far more secure approach. At this point, we fixed PostgreSQL and secured the rest of the services too.

After a review, we also secured Redis with password protection and restricted the Caddy admin interface and SuperTokens to localhost on the server only.
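Putting the database patch together, the hardened service looks roughly like this (variable names and image tag are illustrative, not the exact Nixopus compose file):

```yaml
services:
  db:
    image: postgres:14 # tag illustrative
    environment:
      # a real password is now mandatory; no default baked into the repo
      - POSTGRES_PASSWORD=${DB_PASSWORD:?DB_PASSWORD must be set}
      # have initdb write scram-sha-256 (not trust) into pg_hba.conf
      - POSTGRES_INITDB_ARGS=--auth-host=scram-sha-256
    ports:
      # bind to loopback only, instead of 0.0.0.0
      - "127.0.0.1:${DB_PORT:-5432}:5432"
```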

## Aftermath

When you find one security issue, it is more often a sign that others might exist too.

After implementing these changes, voila, no more unexpected crashes. This incident did not just surface a database security issue; it pointed out how vulnerable our whole setup was. This bug bash taught us that sometimes the most challenging bugs are not bugs at all, but misconfigurations hiding in plain sight.

As developers we often say "it works on my machine", but in production we need to ensure it works securely, reliably, and correctly. This incident was a reminder that security is not optional; it is fundamental.

---

I hope you enjoyed smashing this bug alongside us.

That's all for now.

Thank you. Stay tuned for more freshly brewed content.

Happy learning!!]]></content:encoded>
        <pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[On Freelancing: My lessons & learnings]]></title>
        <link>https://ohmyscript.com/blogs/on-freelancing/</link>
        <guid>https://ohmyscript.com/blogs/on-freelancing/</guid>
        <description><![CDATA[personal reflection on freelancing, the freedom, the challenges, and the reality of building your own path.]]></description>
        <content:encoded><![CDATA[# On Freelancing: My lessons and learnings

Hi everyone!

<DropCap>This article is a reflection on my journey as a freelancer, the lessons learned, the challenges faced, and the reality behind the promise of freedom. If you are considering freelancing or already on this path, I hope that these insights help you navigate it better. </DropCap>

<Image
  src="/images/blog/freelancing_journey.png"
  alt="Freelancing journey"
  caption="The independent path: freedom comes with responsibility"
/>

## Why Freelancing?

I was drawn to freelancing for the same reasons most people are:

- Freedom from the daily commute
- Freedom to choose your projects
- Freedom to set your own hours
- Freedom from office politics

But here's what they don't tell you upfront:

> ## **Freelancing isn't freedom from work. It's freedom in work.**

The work is still work. The deadlines are still deadlines. The clients are still demanding. The difference is: they're yours. You choose which projects to take, which deadlines to accept, and which clients to work with.

<Image
  src="/images/blog/freelancer-whoami.png"
  alt="Freelancer identity"
  caption="A freelancer building their own path"
/>

## The First Project

I remember my first freelance project clearly. I was excited, nervous, and honestly, I was undercharging.

The project was to migrate a legacy product to a modern Next.js application. The existing system's UI and backend were built on server-side-rendered Django. The client was a venture capital firm, and the application was a no-code tool for the companies they had funded to manage their portfolios and other operations. The whole system was legacy: all the UI design and components were done with jQuery and JavaScript embedded in Django templates.

The client seemed nice, the requirements seemed clear initially, and I thought I had everything figured out.

> ## I was wrong.

I quickly discovered significant technical caveats and communication challenges. There was no knowledge transfer, no API documentation; to be honest, there was no documentation at all. It was like being handed a black box and being expected to explore it and deliver results quickly.

To make matters worse, the team that built the product came from a business background, with little to no engineering experience. The entire codebase reflected this; it was a disaster.

None of these challenges were communicated during the initial discovery call. This was partly my fault; I was too excited to take on the work and get started, so I didn't ask the right questions upfront. Transitioning the existing jQuery based UI components to React and decoupling the backend from the frontend proved extremely difficult, as the legacy system didn't follow standard engineering practices.

<Image
  src="/images/blog/challenges-on-gig.png"
  alt="Challenges on gig"
  caption="Reality of freelance challenges: what they don't tell you upfront"
/>

I managed to migrate two modules and deliver them before parting ways with the client. I decided not to continue with the remaining 10-11 modules. The product had been built up since 2008-09 and maintained (or rather, patched) using the same approach ever since. The technical debt was overwhelming, and decoupling such a legacy system was more pain than I was willing to take on, especially since the client couldn't appreciate the technical complexity of decoupling a legacy system of that size, or the time it would take. Perhaps I'll write more about this experience in a separate article.

### What I Learned

- **Negotiation is a skill**: Don't be afraid to discuss rates. Your time and expertise have value.

- **Clear communication saves time**: What seems obvious to you might not be obvious to the client. Always clarify requirements.

- **Contracts matter**: Even for small projects. Define scope, timeline, payment terms, and what happens if things change.

- **Saying no is powerful**: Not every project is worth taking. Some clients aren't worth the stress.

  **Example: Setting clear boundaries**

  **Included:**

  - Frontend development (React/Next.js)
  - API integration with existing backend
  - Responsive design implementation
  - Development testing and bug fixes

  **Excluded:**

  - DevOps and deployment setup (outside project scope)
  - Post-launch maintenance and support (available as separate service)
  - Backend development or API creation
  - Content creation or copywriting

  **Terms:**

  - Revisions: 2 rounds of feedback included
  - Timeline: 4 weeks from project start
  - Payment: 50% upfront, 50% on delivery
  - Additional features: Quoted separately and require scope change approval

## The Feast and Famine Cycle

One of the hardest lessons in freelancing is dealing with the feast and famine cycle.

**The Feast:** You are drowning in work. Multiple clients, tight deadlines, late nights. You are making good money, but you are exhausted.

**The Famine:** Radio silence. No new projects. No responses to proposals. You start questioning everything.

<Image
  src="/images/blog/feast-famine-freelancing.png"
  alt="Feast and famine cycle"
  caption="The reality of freelance income: prepare for both extremes"
/>

### How to Handle It

- **Save during the feast**: When work is abundant, save aggressively. Aim for at least 6 months of expenses.

- **Use downtime productively**: During slow periods, work on your portfolio, learn new skills, write blog posts, contribute to open source.

- **Build a pipeline**: Always have potential projects in the pipeline. Network, maintain relationships, stay visible.

- **Diversify income streams**: Don't rely on a single client or project type. Consider passive income, products, or retainer agreements.

## The Loneliness

Working alone sounds romantic until you are actually alone. No water cooler conversations. No team lunches. No one to bounce ideas off at 2 AM when you are stuck.

### Building Community

- **Join communities**: Online forums, Discord servers, local meetups. Find your tribe.
- **Find accountability partners**: Someone to check in with, share goals, and keep each other motivated.
- **Co-working spaces**: Even if it's just a few days a week, being around other people helps.
- **Mentorship**: Find mentors. Be a mentor. Both help combat isolation.

I've found that the best freelancers aren't lone wolves; they're part of a pack.

## Setting Boundaries

When work is everywhere, work is everywhere.

Your laptop is always open. Your phone is always buzzing. Your mind is always working.

### Protecting Your Time

- **Define work hours**: Even if you are flexible, have core hours when you are available.

- **Create a dedicated workspace**: Separate work from life, even if it's just a corner of your room.

- **Learn to say no**: Not just to bad projects, but to requests outside your scope or availability.

- **Take breaks**: Seriously. Burnout is real, and it is far harder to recover from when you are on your own; trust me when I say this. It comes from very, very personal experience.

  For example, if you are **setting availability**:

  - **Working hours**: 9 AM - 6 PM
  - **Response time**: Within 24 hours
  - **Emergency contact**: Only for critical issues
  - **Weekends**: Unavailable unless discussed

## Understanding Your Value

When you start freelancing, you'll likely say yes to everything.

Then you **learn to say no**. You learn that your value isn't in your availability; it's in your expertise.

### Pricing Your Work

- **Don't compete on price**: Compete on value. Cheap clients are often the most demanding.

- **Know your numbers**: Calculate your actual costs; this includes taxes, health insurance, equipment, software, and time off.

- **Value based pricing**: Sometimes hourly doesn't make sense. Charge based on the value you deliver.

- **Raise your rates**: Regularly. As you gain experience, your rates should reflect that.

> ## Your rate is not just about money. <br/> It is about the respect you have for your own time and expertise.

## The Reality Check

Freelancing is not easier than a traditional job. It is different. You are still working. You still have deadlines. You still have responsibilities.

The difference is: **they are yours.**

You choose the projects. You choose the clients. You choose your path.

But with that choice comes responsibility.

### What You are Really Signing Up For

- **You are the CEO**: Strategy, planning, business development
- **You are the sales team**: Finding clients, proposals, negotiations
- **You are the accountant**: Invoicing, taxes, bookkeeping
- **You are the developer**: Actually doing the work
- **You are the support team**: Client communication, maintenance

It is a lot, but it is also empowering.

## The Trade offs

Every choice is a trade off.

<Image
  src="/images/blog/freelancer-freedom.png"
  alt="Freelancer freedom"
  caption="The trade-offs of freelancing: freedom comes with choices"
/>

- **Stability vs. Flexibility**: Traditional jobs offer stability. Freelancing offers flexibility. You rarely get both; at least at the stage I am in, that is what I have experienced.

- **Security vs. Freedom**: Regular paychecks feel secure. But is it security or the illusion of security?

- **Comfort vs. Growth**: Staying in your comfort zone is easy. But growth happens outside it. There are plenty of opportunities where you will have to work outside your comfort zone, and that is something you cannot avoid. That is where the real fun lies, you get to learn constantly, gain new perspectives, and understand how things work beyond just engineering and code.

There is no right answer. Only the answer that fits you.

Some days you'll question your choice. Some days you'll celebrate it.

Both are valid.

## Setting your space & workflow

The most successful freelancers I know are not the most talented; they are the most organized. They are constant learners who adapt quickly to changing needs and continuously level up their skills.

### Workflows that worked for me

Here are some things that have become essential to my workflow, making daily planning and execution much smoother.

- **Project Management**: Track your projects, deadlines, and tasks. Use tools like AppFlowy, Trello, Notion, or a simple spreadsheet.

- **Time Tracking**: Know where your time goes. It helps with pricing and identifying time sinks.

- **Invoicing**: Automate it. Use tools that handle recurring invoices, reminders, and payment tracking.

- **Documentation**: Document your processes, common solutions, and client preferences. It saves time.

- **Contracts**: Have templates ready. Customize as needed, but start with solid foundations.

<Note>
  I use self-hosted open-source tools that have significantly improved my
  workflow. I plan to write about these tools in a separate article.
</Note>

## Learning to Communicate

Communication is everything in freelancing. You are not just building products, you are managing expectations, setting boundaries, and building relationships.

### Communication Best Practices

- **Over communicate**: Especially early in a project. Better to share too much than too little.

- **Set expectations**: Be clear about timelines, deliverables, and what is included (and explicitly what is not).

- **Document everything**: Meeting notes, decisions, changes. Written records protect everyone.

- **Be proactive**: Don't wait for problems. Check in regularly, share progress, flag issues early.

- **Know your audience**: Technical details for developers, business value for stakeholders.

## Freelancing: Usain Bolt or Mo Farah?

Freelancing isn't a sprint. It is a marathon.

You will have good months and bad months. Good clients and bad clients. Good projects and projects you would rather forget.

<Image
  src="/images/blog/freelancing-sprint-vs-marathon.jpg"
  alt="Sprint vs Marathon"
  caption="Marathon, not a sprint; Usain Bolt runs fast but Mo Farah runs far"
/>

But if you stick with it, you build something valuable:

- **A reputation**: Your work speaks for itself
- **A network**: Relationships that open doors to new opportunities
- **A portfolio**: Proof of what you can do
- **Independence**: The ability to choose your path

## Is Freelancing Right for You?

Not everyone should freelance, and not everyone should avoid it.

The question is not: **"Should I freelance?"**

Rather, the question is: **"What do I want?"**

And are you willing to pay the price for what you want?

### Signs It Might Be Right

- You are self-motivated and disciplined
- You are comfortable with uncertainty
- You enjoy variety in your work
- You are good at managing your time
- You can handle the business side (sales, admin, customer relationships, etc.)

### Signs It Might Not Be

- You need structure and routine
- Financial security is your top priority
- You prefer working in teams
- You do not want to handle business tasks
- You struggle with self-discipline

There is no shame in either path. The goal is to find what works for you.

## Conclusion

Freelancing is not freedom from work; rather it is freedom in work.

**_Your projects. Your clients. Your time. Your life._**

And that's the point.

Not escaping work, but shaping it to fit your life.

It's not easy. It's not for everyone. But if it's for you, it's worth it.

<br />

---

<br />

I know this has been a longer post than expected. If you got this far, I appreciate
your patience and thank you for your time.

If you are considering freelancing or already on this path, I hope these insights help. And if you have your own experiences to share, I would love to hear them.

If you like the article, hit the like button, share it, and subscribe to the blog. If you want me to write an article on a specific domain or technology, feel free to drop me an email at [hi [at] ohmyscript [dot] com](mailto:hi@ohmyscript.com)

Stay tuned for more.

That's all for now. Thank you for reading.

Signing off until next time.
Happy Learning.]]></content:encoded>
        <pubDate>Wed, 17 Dec 2025 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[On Simplicity]]></title>
        <link>https://ohmyscript.com/musings/on-simplicity/</link>
        <guid>https://ohmyscript.com/musings/on-simplicity/</guid>
        <description><![CDATA[Thoughts on the art of keeping things simple in a complex world.]]></description>
        <content:encoded><![CDATA[# On Simplicity

There's a quiet power in simplicity that we often overlook in our pursuit of sophistication. We tend to equate complexity with intelligence, yet the most profound ideas are often the simplest ones.

## The Paradox of Choice

In a world that celebrates abundance, we're drowning in options. More features, more choices, more everything. But what if less is actually more? What if the key to clarity isn't addition, but subtraction?

## Minimalism as a Practice

Minimalism isn't about deprivation. It's about intentionality. It's the practice of removing the unnecessary so the necessary can speak. Whether it's in code, design, or life itself, the principle remains the same: simplify until you can't simplify anymore.

## Finding Your Essential

The challenge isn't identifying what to add, but what to remove. Every element should earn its place. Every feature should justify its existence. The question isn't "Why not include this?" but "Why include this?"

> "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry

## Embracing Constraints

Constraints aren't limitations - they're frameworks for creativity. When you have infinite possibilities, you're paralyzed. When you have clear boundaries, you're free to create.

This is why some of the most elegant solutions emerge from the tightest constraints. Not despite them, but because of them.

---

The art of simplicity is the art of seeing what matters and having the courage to let go of everything else.]]></content:encoded>
        <pubDate>Wed, 10 Dec 2025 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Expedition of Async Programming in JavaScript]]></title>
        <link>https://ohmyscript.com/blogs/async-programming-javascript/</link>
        <guid>https://ohmyscript.com/blogs/async-programming-javascript/</guid>
        <description><![CDATA[A walkthrough of asynchronous programming in JavaScript: callbacks, promises and async/await.]]></description>
        <content:encoded><![CDATA[# Expedition of Async Programming in JavaScript

## My Background with Programming

This is the story of my expedition to understand Asynchronous Programming in JS. Coming from a Computer Science background, I had various opportunities to explore multiple platforms and learn different programming languages, from assembly to high-level languages. Until JavaScript, though, I had never struggled to understand the constructs or flow of a language.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j8479e37h8ruk4z89s78.png"
  alt="Programming journey"
  caption="Programming journey"
/>

## Understanding Synchronous JS

As far as I know, JS is one of those technologies that stands out from the rest of the industry, and I am not speaking about performance, optimization, and so on.

JavaScript in Node.js is programmed asynchronously.

This might be pretty confusing. While JavaScript runs on a single thread, Node.js introduced a powerful way to handle non-blocking operations. This means that even though only one operation can run on the main thread at a time, time-consuming tasks like reading files or hitting APIs don't halt the execution of other code.

To put it in simple words:

> JavaScript is a **single threaded** language, which means it has one call stack and one memory heap, and it simply executes code in the order it is written. It must finish executing a piece of code before it moves on to the next.
>
> This is the synchronous behavior of JS.
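
To make that concrete, here is a tiny sketch of the blocking behavior (the busy-wait loop is only for illustration; real code would never spin like this):

```js
// Synchronous behavior: a long-running task blocks everything after it
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // busy-wait; nothing else can run meanwhile
}

console.log("start");
blockFor(100); // the next line cannot run until the full 100 ms have passed
console.log("end");
```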

Whoa! You might wonder, then, why the title **_Async Programming_**, and what on Earth does **_Async Programming have to do with JavaScript_**?

For this, we should thank Ryan Dahl, the mastermind (and, some would joke, the culprit) behind one of the most dominant, powerful runtime environments, Node.js.
Node.js is the environment that enabled asynchronism in server-side JavaScript.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ex1aqqwet1fy4b2ym7lk.png"
  alt="Node.js and asynchronous programming"
  caption="Node.js and asynchronous programming"
/>

## Asynchronism with JS

> Asynchronous JavaScript, or from now on let's say Node.js, runs in a single process without creating a new thread for each task.
> Unlike languages such as Java or Python, which traditionally block their thread for I/O (that is, when we make an API call, access the database, read a text file, and so on) and pause execution until the operation completes, Node.js keeps going.

In Node.js, it is contrary to the above situation. Node.js uses something referred to as **_Callbacks_**. The Node.js Event Loop orchestrates async tasks without blocking the main thread.

> A simple real-life scenario: let us say you are cooking and have to wash clothes as well. You put the clothes into the washing machine for a wash and return to cooking. Once the washing is done, you can get back to it.
> Washing clothes did not stop you from cooking, did it?
> The same principle applies here as well.

I suppose this gives you the basic idea behind the **_What_** and **_How_** of Async Programming; now let us dive into something deeper, the What and How of Async Programming in Node.js.

## Async Programming in Node.js

In Node.js, a function that performs an asynchronous operation accepts another function as an argument; once the asynchronous task completes, that function is queued and eventually invoked with the result. This function is what we know as a callback.

#### Callback

This is how I initially understood the way Node.js works using callbacks.

For instance:

```js
// Assume getUserById performs an async lookup and accepts a callback
getUserById(1, function (err, result) {
  if (err) throw err;
  console.log(result);
});

console.log('Hello World');

// OUTPUT:
// Hello World
// { id: 1, name: 'Lee' }
```

The above function runs, and once it's complete, the result is made available within the callback function.

In Node.js, when a function performs some asynchronous operation, i.e., eventually returns some data or throws some error, you hand it a function that will be executed once the asynchronous operation is done.

This function, which deals with the returned value/throws an error, is called a Callback function.

Meanwhile, Node.js ensures it continues with the normal execution of the code, just as in the above example, where `console.log()` did not wait for the complete execution of the `getUserById()`.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwgxreoyuwvz7mo2wg5w.png"
  alt="Callback execution flow"
  caption="Callback execution flow"
/>

### Callback hell, the vexation

As time passed, I found it quite frustrating to deal with these Callback functions.

The first problem is variable scope, and the second is nesting callback after callback just to use a result, since we cannot return the resultant value from a callback.

```js
function getUserById(userId, callback) {
  // Simulate async behavior
  setTimeout(() => {
    const result = {
      id: 1,
      name: "Lee",
      userEmail: "iam@you.com"
    };
    callback(null, result);
  }, 1000);
}

getUserById(1, function (err, result) {
  if (err) throw err;
  console.log(result); // 2: result only exists inside this callback
});

// console.log(result); // ReferenceError: result is not defined out here
console.log("Hello World"); // 1

// 1, 2 being the order of execution
// OUTPUT:
// Hello World
// { id: 1, name: 'Lee', userEmail: 'iam@you.com' }
```

The only way to access or do anything with the result was to operate inside the callback.

```js
function getUserById(userId, callback) {
  // Simulate async DB fetch
  setTimeout(() => {
    const result = {
      id: 1,
      name: "Lee",
      userEmail: "iam@you.com",
    };
    callback(null, result);
  }, 1000);
}

function sendEmail(email, callback) {
  // Simulate async email sending
  setTimeout(() => {
    callback(null, "Email sent successfully to " + email);
  }, 1000);
}

// Call the functions, nesting one callback inside another
getUserById(1, function (err, result) {
  if (err) throw err;

  sendEmail(result.userEmail, function (err, emailResult) {
    if (err) throw err;
    console.log(emailResult); // "Email sent successfully to iam@you.com"
  });
});
```

This caused me a lot of confusion with naming variables and accessing them across the flow of the code and the function calls. This is where I realized I was dealing with something called **_CALLBACK HELL_**, thanks to its confusing pyramid-shaped nesting.

**_Heartily thankful to StackOverflow._**

### Promises

As I struggled with nested callbacks, I stumbled upon a concept that served as a solution to my problem of Callback Hell; something called **_Promises_**.

A **_Promise_** is an object that encapsulates an asynchronous operation and its eventual completion (or failure). It is a proxy for a value not known when the asynchronous code starts executing.

In simpler terms, it's a placeholder for a value that will be available in the future, once the async task is done.

Sounds very similar to a Callback?
It is quite similar to the callback design, but less troublesome in terms of readability of the code, variable scope, and variable hoisting.

Instead of taking a callback function to deal with the asynchronous result, the Promises API provides its own methods, `.then` and `.catch`, which execute through function chaining, as shown below.
The same example, in terms of Promises:

```js
function getUserById(userId) {
  return new Promise((resolve) => {
    setTimeout(() => {
      const result = {
        id: 1,
        name: "someone",
        userEmail: "someone@example.com"
      };
      resolve(result);
    }, 1000);
  });
}

function sendEmail(email) {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve(`Email sent successfully to ${email}`);
    }, 1000);
  });
}

getUserById(1)
  .then(function (result) {
    console.log(result);
    return result;
  })
  .then(function (userData) {
    console.log('Email to be sent');
    // return a promise; the next .then() waits for it
    return sendEmail(userData.userEmail);
  })
  .then(function (emailStatus) {
    // runs once sendEmail is resolved
    console.log(emailStatus);
  })
  .catch(function (err) {
    console.error('Error:', err);
  });

// OUTPUT:
// { id: 1, name: 'someone', userEmail: 'someone@example.com' }
// Email to be sent
// Email sent successfully to someone@example.com
```

This looks far more organized than the callback version, with its deeply nested code structure.

This paradigm, too, had issues. The first is what I call **Promise Hell**: you keep passing the result down a long chain of `.then()` calls.

> _Another major setback was scope, again. Data can be passed/returned from one `.then()` to the next, but it is not available outside the chain for further processing._
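
A minimal sketch of that scope problem, with a hypothetical `getUser` that resolves immediately:

```js
// A value set inside .then() is not available synchronously outside it
function getUser() {
  return Promise.resolve({ id: 1, name: "Lee" });
}

let user; // try to smuggle the result out of the chain

getUser().then((result) => {
  user = result; // runs later, on the microtask queue
});

console.log(user); // undefined: the .then() callback has not run yet
```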

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3uh0losid7iwsrwdhiv.png"
  alt="Promise Hell illustration"
  caption="Promise Hell illustration"
/>

This became a major issue when I had to deal with several complex APIs, and that's when I came across the solution, again on StackOverflow: something called **_Async/Await_**.

### Async/Await

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/457s0isk9fdolcri1h8e.png"
  alt="Async/Await introduction"
  caption="Async/Await introduction"
/>

This was a step forward for me: a way to write my logic in a simplified, sorted manner.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zef9v7hlz46vkj7aj2id.png"
  alt="JavaScript callback and promise joke"
  caption="When a JavaScript date has gone bad, 'Don't call me, I'll callback you. I promise!'"
/>

It turned out to be the ultimate solution for several problems I had been facing during the development.

Async/Await is one of the best ways to make asynchronous Node.js code read more imperatively.

This is a pattern where asynchronous operations are handled in a much more simplified way.
It brings two keywords into the picture:
"**async**" and "**await**". `async` declares a function that deals with some asynchronous operations, and `await`, used inside an "**async**" function, declares that it shall "**await**" the result of an asynchronous operation.

Simple Example with the above code:

```js
async function someFunction(userId) {
  // getUserById and sendEmail are the Promise-returning
  // functions from the previous example
  let user = await getUserById(userId);
  console.log(user);
  console.log('Email to be sent');
  await sendEmail(user.userEmail);
  console.log('Email sent');
  console.log('Hello World');
}

someFunction(1);

// OUTPUT:
// { id: 1, name: 'someone', userEmail: 'someone@example.com' }
// Email to be sent
// Email sent
// Hello World
```

In the above instance, isn't the code more readable? The code is neatly structured and the flow makes sense as it executes. Async/Await makes the code look more like synchronous code.

One major thing to consider in the async/await way of handling asynchronous operations is that a function can **await** only an "_awaitable_".

Awaitable, meaning an expression that evaluates to a Promise object, such as a call to a function carrying some asynchronous operation. (A plain value can also be awaited; it simply passes through.)
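
For instance, here is a small sketch of awaiting such an awaitable (the `delay` helper is just for illustration):

```js
// An "awaitable": a function that returns a Promise
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

async function main() {
  const fromPromise = await delay(10, "from a Promise"); // waits ~10 ms
  const plain = await 42; // a non-Promise value simply passes through
  console.log(fromPromise, plain); // "from a Promise" 42
}

main();
```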

This was my journey of learning Asynchronous programming with Node.js.

---

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rgo50pg07zvmjiabhwza.png"
  alt="Async programming summary"
  caption="Async programming summary"
/>

## Conclusion

From my point of view, Callbacks are one of the worst ways to code and deal with asynchronous operations. The Promise way of dealing with asynchronous programming overcomes the Callback Hell issue, and Promises definitely have their own perks in different cases. Async/Await might seem like the best solution as of now, but don't limit yourself to these. Explore and use each of them based on the use case you are dealing with.

Thanks for reading. Signing off, until next time.

Happy Learning.]]></content:encoded>
        <pubDate>Fri, 18 Jul 2025 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Inside Nixopus: Managing Database Migrations]]></title>
        <link>https://ohmyscript.com/blogs/inside-nixopus-managing-database-migrations/</link>
        <guid>https://ohmyscript.com/blogs/inside-nixopus-managing-database-migrations/</guid>
        <description><![CDATA[How Nixopus handles database schema changes with an automatic, domain-driven migration system using Bun ORM and PostgreSQL.]]></description>
        <content:encoded><![CDATA[# Inside Nixopus: Managing Database Migrations

This article mainly focuses on laying out our foundational approach to setting up the [database migration system for Nixopus](https://github.com/raghavyuva/nixopus/).

**_Database migrations_** are often the unsung heroes of software application development. They work silently in the background, ensuring that your database schema evolves safely alongside your application code. Yet for many teams, this is a major source of stress and uncertainty. In this blog, we will explore how we have set up the migration framework for Nixopus. This has become a pivotal step in our developmental and self-host workflow.

## Understanding Database Migration

To start with, let us first explore and understand what we mean by **_database migration_**. As we always do, let us take an analogy.

We often use different database systems like MySQL, PostgreSQL, MongoDB, etc., for data persistence. In relational databases, the data is stored in the form of tables with columns and rows.

As the application grows, the number of tables grows, and the data defined in each table may vary and change; hence, it is very important to keep track of these changes.

Tables are at the database system level, whereas at the programmatic level, we maintain them with something called **_schemas_**. Like a house blueprint that dictates where rooms, doors, and wiring go, a schema simply defines how the data is organized and connected.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/humm2q3dkpxw5zv5180m.png"
  alt="Schema vs tables"
  caption="Schema vs tables"
/>

Now, let's imagine you decide to change the blueprint mid-build to add a new room or window; you need to carefully make changes to the existing blueprint so that you don't end up bringing the walls down.

This problem is addressed at the schema level, to ensure that changes to the existing tables, which may be in the form of adding new columns, altering the type of existing columns, or adding new tables, are handled through what we call **_Database Migrations_**.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x9pm6demw0q72mcmykqc.png"
  alt="Data migration meme"
  caption="Data migration meme"
/>

When we started building Nixopus, we quickly realized that database schema management would be critical to our success in self-hosting and local development setups. We planned with the expectation that we would eventually see multiple developers working on different features, frequent deployments, and the need to support both development and production environments, all of which would require a very streamlined process for handling database migrations.

We had aligned on the following factors that our migration system would have to adhere to:

- **Reliability**: Migrations must execute consistently across all environments
- **Automatic**: No manual intervention should be required during deployments
- **Forward & Backward Compatible**: Ensure easy rollback scenarios
- **Separation of Concerns**: Organization of migrations by domain or feature

Many existing tools either lacked the flexibility we needed or came with unnecessary complexity, be it in terms of testability or adding rollbacks in case of errors, etc.

Hence, we decided to build our migration system tailored to our needs and requirements using a tool called [Bun ORM](https://bun.uptrace.dev/guide/migrations.html).

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/icreopncr72qkqyyq2se.png"
  alt="SQL vs ORM"
  caption="SQL vs ORM"
/>

## Do you know ORM?

To those who are not aware of the term ORM, let me give a glimpse of it. An ORM, or **_Object Relational Mapper_**, is like a waiter at a restaurant: you tell them what you want, they go to the kitchen (I mean the database), then bring back your order (data) as perfectly plated objects in ready-to-use form.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/605ksjoeg5exjzcot4vv.png"
  alt="Who ORM? Who ORM? Bun ORM"
  caption="Who ORM? Who ORM? Bun ORM"
/>

Without ORM, you would have been presented with raw vegetables and ingredients straight from the kitchen (of course, by that I mean the database), and you would have to do all the cooking yourself, which means writing and managing every query and data transformation.

Coming back, we have built our migration system around a simple yet **powerful concept**, where a pair of SQL files represents every database change:

1. **Applying the change (up migrations)**
2. **Rolling it back (down migrations)**

## Migration Life cycle

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nu808jb4jnzs8vl5c5ko.png"
  alt="Migration code structure flow"
  caption="Migration code structure flow"
/>

This approach has helped our system to easily take care of schema evolution and database versioning.

The migration system follows a defined life cycle:

1. **Discovery**: The system scans the `api/migrations` directory

2. **Parsing**: Migration files are parsed and paired (up/down migrations)

3. **Ordering**: Migrations are sorted by their numeric IDs

4. **State Check**: The system compares file system migrations with applied migrations in the database

5. **Execution**: Pending migrations are executed in transactions

6. **Recording**: Successfully applied migrations are recorded in the migration table
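
The parsing, ordering, and state-check steps can be sketched roughly as follows. This is plain JavaScript purely for illustration; Nixopus's actual implementation is different, and the names here are hypothetical:

```js
// Illustrative sketch: pair up/down files, order by ID, skip applied ones.
// File names follow the seqno_entity_{up,down}.sql convention.
function planMigrations(files, appliedIds) {
  const pairs = {};
  for (const f of files) {
    const m = f.match(/^(\d+)_(.+)_(up|down)\.sql$/);
    if (!m) continue; // ignore files that don't match the convention
    const id = Number(m[1]);
    pairs[id] = pairs[id] || { id, entity: m[2] };
    pairs[id][m[3]] = f; // attach the up or down file
  }
  return Object.values(pairs)
    .sort((a, b) => a.id - b.id)           // Ordering: by numeric ID
    .filter((p) => !appliedIds.has(p.id)); // State check: skip applied ones
}

const plan = planMigrations(
  ["2_users_up.sql", "2_users_down.sql", "1_auth_up.sql", "1_auth_down.sql"],
  new Set([1]) // migration 1 is already recorded as applied
);
console.log(plan); // only migration 2 is pending, with its up/down pair
```

Each entry in the resulting plan would then be executed inside its own transaction and recorded on success.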

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ej1w31t3m718xo63qfyo.png"
  alt="Migration system lifecycle"
  caption="Migration system lifecycle"
/>

Now that you have an overview of our setup and how it works, you might wonder what actually sets our approach apart. Fair enough!

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wnoxp32ng9v8t7ajwhip.png"
  alt="Migration differentiator"
  caption="Migration differentiator"
/>

1. **_Domain Driven Structure_**: Instead of throwing all migrations into a single directory, we have organized them by domain:

```text
api/migrations/
               ├── applications/   # App deployment features                   
               ├── audit/          # Audit logging and compliance
               ├── auth/           # Authentication & authorization
               ├── containers/     # Container management
               ├── domains/        # Domain and DNS management
               ├── feature-flags/  # Feature toggle system
               ├── integrations/   # Third-party integrations
               ├── notifications/  # Notification system
               ├── organizations/  # Multi-tenancy & organizations
               ├── rbac/           # Role-based access control
               └── users/          # User management and profiles
```

This structure makes it easy for us to find and create migrations related to specific work or flow, helping us avoid confusion or conflicts.

2. **Automatic migration execution on application startup**: One of the key design decisions was to make migrations completely automatic. When the application starts, the migration system runs before any other initialization. This approach eliminates the need for a separate migration step during deployments and ensures that the database schema is always up to date when the application starts.

3. **Atomic, Transactional Migrations**: Every migration runs inside a database transaction, ensuring atomicity. If any part of a migration fails, the entire migration is rolled back, ensuring that the schema remains fully consistent.

4. **_Bidirectional Migrations_**: Every migration has two files:

- `seqno_entity_up.sql` (applies the change)
- `seqno_entity_down.sql` (rolls back the change)

This ensures that we can always roll back changes if something goes wrong.

As we conclude this deep dive, the first of many articles in the **_Inside Nixopus_** series, I would like to highlight some of the major learnings and key takeaways:

1. **Keep each migration small and focused** for easier review and rollback.
2. Keep a _**down migration** for every **up migration**_, ensuring rollbacks are easy.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cz9ijjmrruycdzpx65cn.png"
  alt="Key learning takeaways Nixopus"
  caption="Key learning takeaways Nixopus"
/>

The key insight is that sometimes the best tool is the one you build yourself. By understanding our specific needs and constraints, we were able to set up our migration system that fits perfectly into our development workflow and easy self-hosting.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/66k8t8vncrm2g0vzpkym.png"
  alt="Nixopus"
  caption="Nixopus"
/>

The project, as we publish this article, is in the Alpha stage.

You can check it out on [GitHub](https://github.com/raghavyuva/nixopus) and see for yourself.

To sum it up, this approach helped us turn schema changes from a headache into a reliable process.

If you would like to get involved or have questions, join our Discord community for real-time support and feedback. You can self-host Nixopus today, subscribe for updates, and stay tuned as we roll out new features and stability enhancements.

<Image
  src="https://user-images.githubusercontent.com/31022056/158916278-4504b838-7ecb-4ab9-a900-7dc002aade78.png"
  alt="Join our Discord Community"
  caption="Join our Discord Community"
  href="https://discord.gg/skdcq39Wpv"
  width={200}
/>

We have recently collaborated with **_HostUp_**, a reliable VPS provider based in Sweden, to bring you an exclusive deal of **_10% off_**, recurring, on any VPS plan. Whether you choose to self-host Nixopus or deploy containerized apps, this is the perfect opportunity to secure rock-solid infrastructure at a discount; you can find the details in our [Discord Community](https://discord.com/invite/skdcq39Wpv).

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/itnnybtsdrgzmv5dj0lz.png"
  alt="HostUp partnership"
  caption="HostUp partnership"
  href="https://hostup.se/en/vps/"
/>

Stay tuned for more freshly brewed content.

That's all for now. Thank you for reading.

Signing off until next time.

Thank you.

---]]></content:encoded>
        <pubDate>Fri, 27 Jun 2025 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Nixopus: Simplifying VPS Management]]></title>
        <link>https://ohmyscript.com/blogs/nixopus-vps-manager/</link>
        <guid>https://ohmyscript.com/blogs/nixopus-vps-manager/</guid>
        <description><![CDATA[How Nixopus streamlines VPS setup, deployment and server-management with a modern open-source platform.]]></description>
        <content:encoded><![CDATA[# Nixopus: Simplifying VPS Management

This article is my reflection on the journey of how I relied on free-tier clouds and platform-specific hosting until I discovered the VPS.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h9rvhdq7040pvgrt9mk.png"
  alt="My journey from using cloud setup to VPS"
  caption="My journey from using cloud setup to VPS"
/>

> Shipping Code while Sipping Coffee

## What is Self-Hosting?

Back then, I was still trying to figure out how all of this worked: which command did what, which service refused to start, and why things that worked on my local machine didn't work when deployed (thank God for Docker).

I still recall one of my earliest deployments. It was the [github-readme-quotes](https://github.com/shravan20/github-readme-quotes) app on Heroku. A simple git push to deploy felt magical. I didn't really have to care how it worked. I just knew that if I followed the documentation devoutly, it was assured to work; if not, StackOverflow always had my back.

This was the same with Firebase, Render, or Netlify for static stuff. All of them taught me how to ship quickly.

## Why Self-Host?

But then came the real world: the need to host private apps, things that need ports, memory tuning, background workers, cron jobs, etc. Soon, those magical platforms started talking back and complaining to me:

- _"You can’t bind that port"_

- _"You’re out of free dynos"_

- _"Exceeded dyno quota for the month"_

I found myself stuck at an architectural impasse until the day one of the engineers I know said, "Just spin up a VPS". That advice removed the roadblock.

## Can I rent a server on the cheap?

To those who are not aware of what a **_VPS_** is, and how it differs from a regular server, let me try to explain it with an analogy:

> Imagine you've built a website and now you want to make it public, so you need to host it: basically, put it on a computer that is always online. This is called hosting.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2j7ke33delch7jw8wo9x.png"
  alt="How to host your site"
  caption="How to host your site"
/>

## Types of Servers I can rent

When it comes to hosting a website, there are three types of hosting, each of which balances cost, control, and performance in its own way. Let us compare hosting with where we live:

- **_Shared Hosting_** is like renting a room in a PG (paying guest accommodation): extremely affordable, but you share your resources with others and have very little control

- **_Virtual Private Server_** (aka VPS) is like having your own apartment: you get private space, you are secure, you decorate and customize as you want, and you can truly call it your own.

- **_Dedicated Server_** is like leasing an entire bungalow; you alone enjoy all its resources and full autonomy at a very high price.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78id0zjgbqfldv4sejxw.png"
  alt="Understanding VPS"
  caption="Understanding VPS"
/>

These rented servers are either crazy big computers sitting in freezing-cold data centers (dedicated) or virtual computers carved out of bigger machines using virtualization (that's your VPS).

So, yep, the VPS was that sweet middle ground: the cost of a simple 1 BHK flat with the freedom to knock down a wall (just kidding). The balance I needed came in the form of a VPS.

So I picked a VPS provider and bought a VPS for €7 (~₹697). I got an IP address, a username (root), and SSH details.

By then, I had learnt what a VPS was and a little about what happens behind the scenes from the few Google searches I had done. I understood that a VPS is a virtual machine running on a larger physical server, with its own dedicated RAM, disk space, and CPU. And fortunately, since I was already comfortable with Linux commands, I felt navigating this environment wouldn't be a problem.

Since I had previously tinkered quite a lot with Raspberry Pi, by hooking it to the local network and SSH'ing into it from my laptop, I already knew that the only way to connect to my VPS was using SSH.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3pc43sa7g5oxh8ghng7y.png"
  alt="SSH connection to VPS"
  caption="SSH connection to VPS"
/>

## Can I interact with the remote server?

To those who do not know what SSH means, let us again explore it with an analogy:

**_SSH is like a passport to access your VPS_**. With SSH, you can remotely step inside your VPS, like teleporting into a distant terminal. It was as if I'd opened the door to another computer without leaving mine. There is no UI or browser, just plain terminal access to your VPS and its raw power.
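
In practice, that first connection is just one command. A minimal sketch, assuming the example IP used later in this post (`123.45.67.89` is a placeholder) and key-based login, which is safer than typing the root password every time:

```bash
# Generate an SSH key pair; ed25519 is a good modern default.
# A throwaway path is used here so nothing in ~/.ssh is touched;
# in real use, ~/.ssh/id_ed25519 is the typical location.
KEY="$(mktemp -d)/vps_key"
ssh-keygen -t ed25519 -f "$KEY" -N "" -C "my-vps" -q

# Install the public key on the server (asks for the root password once):
#   ssh-copy-id -i "$KEY.pub" root@123.45.67.89
# From then on, log in without a password:
#   ssh -i "$KEY" root@123.45.67.89
echo "key pair created at $KEY"
```

The commented `ssh-copy-id` and `ssh` lines are left inert here since they need a live server; substitute your own IP and run them interactively.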

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cd849rottalq04z538aw.png"
  alt="Linux terminal"
  caption="Just Linux haha"
/>

Every server I had touched so far ran Linux, so I already had a clear-cut understanding. So far, everything looked good.

## What should I do to deploy?

I started slowly moving my apps to the VPS and running the scripts to get them up. Then I ran into another hurdle: I could only access the deployed applications and services via their IP address, like `http://123.45.67.89`. Unlike Heroku or Netlify, which gave me a ready-made subdomain for a project, like `github-readme-quotes.herokuapp.com`, my move to a VPS left me staring at a bare IP address.

I knew I needed a proper name and URL, like `some-project.com`, and that's where DNS comes in.

For those who do not know what **DNS** (Domain Name System) is, consider it the **Internet's Yellow Pages (phonebook)**: it translates domain names to IP addresses.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h16osgkfsihc61kau7nj.png"
  alt="DNS as Internet's Yellow Pages"
  caption="DNS as Internet's Yellow Pages"
/>

A lot of people still find tying a domain to a VPS surprisingly tricky. Hence, I keep a simple recipe:

1. You buy a domain from Domain Registrar
2. Point it to your VPS IP using an A record in your registrar's control panel

```text
Type: A
Host: @
Value: 123.45.67.89
```

3. Set up Nginx
4. Configure SSL
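
Steps 3 and 4 boil down to one small Nginx server block plus a certificate. Below is a rough sketch; the domain, app port, and file paths are placeholders, not a definitive setup. The config is written to a temp file here, but would normally live under `/etc/nginx/sites-available/`:

```bash
# Write a minimal reverse-proxy server block (placeholder domain and port).
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
server {
    listen 80;
    server_name some-project.com;

    # Forward incoming requests to the app listening on a local port.
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
echo "wrote nginx config to $CONF"

# In real use: move it into /etc/nginx/sites-available/, symlink it into
# sites-enabled/, then run `sudo nginx -t && sudo systemctl reload nginx`.
# A free SSL certificate can then be obtained with Certbot, e.g.
# `sudo certbot --nginx -d some-project.com`, which also updates the
# server block to listen on 443.
```

The heredoc is quoted (`'EOF'`) so that Nginx variables like `$host` are written literally instead of being expanded by the shell.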

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39wnt05pkg4pq4veabaw.png"
  alt="Nginx and SSL setup"
  caption="Nginx and SSL setup"
/>

You might be wondering: what are Nginx and SSL? Let us understand them better by diving a little deeper into the concepts.

Every service on your server or computer communicates over a specific port; some ports are predefined by convention, for example:

- `SSH` on port `22` for remote access
- `HTTP` on port `80` for unencrypted web traffic
- `HTTPS` on port `443` for encrypted web traffic via SSL/TLS
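
On most Linux systems, these conventional assignments are listed in the local services database; a quick way to peek at them (this shows the conventions only, not what is actually running; `ss -tln` shows live listeners):

```bash
# Well-known port assignments live in /etc/services on most Unix-like systems.
grep -Ew "^(ssh|http|https)" /etc/services
```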

By now, you must have understood that SSL/TLS is what turns ordinary HTTP into secure HTTPS, making sure that the data between user and server stays private and secure.

Nginx is the traffic cop on my VPS: it forwards user requests to the respective applications, handles SSL encryption, and serves static content.

You must be wondering, **_"Voila, Great, now that everything is in place, and the hard part is over, life must be easier"_**.

Hahaha!! Well, that's exactly when the real problems began. Over time, the deployment process became a familiar routine; a rather painful routine, I would say.

This routine masked a lot of headaches: broken configs, panicky Nginx configuration tweaks, the dreaded thought of “Did I actually run the right script?”, and the manual responsibility of deploying all of this and verifying the releases without any automation.

Eventually, I scripted it all, but I can never forget those initial days of manual, shaky deployments. They taught me the cost of carelessness: I once spent a whole day hunting down a single misplaced config. It wasn't about skill, but about the mental tax it caused.
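
For the record, that script was nothing fancy. Here is a minimal sketch of the idea; the app directory and service name are hypothetical, and a dry-run guard makes it print the steps instead of running them by default:

```bash
set -euo pipefail

# Hypothetical values; replace with your own app directory and service name.
APP_DIR="/srv/myapp"
SERVICE="myapp"
DRY_RUN="${DRY_RUN:-1}"   # default to preview; set DRY_RUN=0 to execute

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

deploy() {
  run git -C "$APP_DIR" pull --ff-only    # fetch the exact new code
  run systemctl restart "$SERVICE"        # restart the application
  run curl -fsS http://localhost/health   # quick post-deploy sanity check
}

deploy
```

With the default `DRY_RUN=1` it only prints the three steps; setting `DRY_RUN=0` would actually execute them on the server.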

By then, I had realised that managing multiple applications on a single VPS only made things worse: I had to juggle process monitoring, log checking, and running update scripts.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dj6vony210gqg82f3sey.png"
  alt="VPS monitoring"
  caption="VPS monitoring"
/>

> With great power comes great responsibility

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m7hqma5tahj5zzo2eigy.png"
  alt="On a lighter note"
  caption="On a lighter note"
/>

With experience comes great clarity; I started seeing patterns in the process I followed. I had been doing this for a while, and every time I spun up a VPS or deployed a new project on one, I found myself re-Googling the same 20 commands. Across projects, I observed some major pain points:

- Copy-pasting identical configurations
- Deployment failures
- DNS setups
- Setting up Firewall rules
- Forgetting about SSL renewals

It was the same story day in and day out. That is when I asked myself: has this problem not been solved and standardized yet? Do we not have a tool that streamlines and simplifies VPS management?

## Discovering Nixopus

My search led me to Nixopus, an indie tool built by a community led by Raghav to tackle these exact frustrations. As we talked, it became clear that Nixopus wasn't born from ambition but from fatigue, the kind that drives you to refactor your life. It promised everything I had been hunting for.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ois17ln8re3q31a9gekb.png"
  alt="VPS Management Solution"
  caption="VPS Management Solution"
/>

Though still in its alpha stage, I've started using it to manage my VPS, and I even contribute improvements whenever I can. I am convinced that this project has a bright future. You can check it out on [GitHub](https://github.com/raghavyuva/nixopus) and see for yourself.

If you would like to get involved or have questions, join our Discord community for real-time support and feedback. You can self-host Nixopus today, subscribe for updates, and stay tuned as we roll out new features and stability enhancements.

We have recently collaborated with HostUp, a reliable VPS provider based in Sweden, to bring you an exclusive deal of 10% off, recurring, on any VPS plan. Whether you choose to self-host Nixopus or deploy containerized apps, this is the perfect opportunity to secure rock-solid infrastructure at a discount; the details are in our [Discord community](https://discord.com/invite/skdcq39Wpv).

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/itnnybtsdrgzmv5dj0lz.png"
  alt="HostUp partnership"
  caption="HostUp partnership"
  href="https://hostup.se/en/vps/"
/>

You can claim your VPS coupon and start building with Nixopus today.

Join our [Discord](https://discord.com/invite/skdcq39Wpv) community, you’re always welcome to hop in for community support!

Summing it up, my journey from building on free-tier clouds and shared hosting to fully embracing a VPS has taught me one thing: having control doesn't have to mean endless, repetitive manual work and management. Nixopus bridges that gap, so that from now on you can focus on what really matters.

Stay tuned for more freshly brewed content.

That’s all for now. Thank you for reading.

Signing off until next time.

Happy Learning!!]]></content:encoded>
        <pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[What is Second Brain]]></title>
        <link>https://ohmyscript.com/second-brain/index/</link>
        <guid>https://ohmyscript.com/second-brain/index/</guid>
        <description><![CDATA[A digital garden for thoughts that don]]></description>
        <content:encoded><![CDATA[# What is Second Brain

This is my second brain; a place where ideas live, breathe, and connect in ways they never could in a linear blog post.

No polished narratives, no perfect structure. Just raw knowledge, half-formed thoughts, and connections waiting to be discovered.

Think of it as a digital garden where concepts grow organically. Some notes are seeds, others are in full bloom. They link to each other, reference one another, and evolve over time.

Unlike blogs that freeze ideas in time, these notes are alive: constantly being refined, expanded, and interconnected.

Here you'll find everything from technical deep-dives to fleeting observations, system architectures to philosophical musings. If blogs are performances, this is the rehearsal space where the real work happens.]]></content:encoded>
        <pubDate>Thu, 12 Dec 2024 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[What happens when Everything goes wrong: QA Testing Gone Wrong?]]></title>
        <link>https://ohmyscript.com/blogs/qa-gone-wrong/</link>
        <guid>https://ohmyscript.com/blogs/qa-gone-wrong/</guid>
        <description><![CDATA[An exploration of a bizarre npm prank package called ]]></description>
<content:encoded><![CDATA[# What happens when "Everything" goes wrong: QA Testing Gone Wrong?

## Overview

This is an article on a recent event of QA gone south, causing chaos across the whole NPM ecosystem.

## Background

A software engineer, PatrickJS, with the NPM user account gdi2290, recently
orchestrated a notable prank on the NPM registry by introducing a package named "**_everything_**."

## What happened

<Image
  src="https://substackcdn.com/image/fetch/$s_!zuSV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307ca914-57bf-4475-9956-e11235364bd0_888x632.png"
  alt="image"
  href="https://substackcdn.com/image/fetch/$s_!zuSV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307ca914-57bf-4475-9956-e11235364bd0_888x632.png"
  caption="Callback Hell? PatrickJS be like lemme show what is Dependency Hell!!!"
/>

Crafted with the deliberate intent of depending on every publicly available NPM package, this package triggers a cascade of millions of transitive dependencies, precipitating a Denial of Service (DoS) for users who choose to install it. The fallout exhausts storage/disk space, which in turn breaks build pipelines.

**_But, that's just the tip of the iceberg._**

## Fallout

<Image
  src="https://substackcdn.com/image/fetch/$s_!nQCR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6edce6e0-4ed9-40e2-b136-76db13b4a2f1_887x500.png"
  alt="image"
  caption="The audience echoed a similar sentiment"
/>

One might question who would intentionally install such a package, but the broader consequence is that the creators of the "everything" package made it impossible for npm authors who had ever published a package to the registry to remove their packages.

This is due to npm's policy, which prevents the deletion of packages that are currently in use by other projects. In essence, the extensive dependencies created by the "everything" package have inadvertently trapped other packages, hindering their authors from removing them when needed.

## Similar incidents

**_This incident isn't an isolated occurrence._**

<Image
  src="https://substackcdn.com/image/fetch/$s_!ZKa9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd228983-0a6f-406b-ba20-cab82978a34a_461x322.png"
  alt="High Quality Shark Tank India Aman Blank Meme Template"
  caption="Similar incidents have happened in the past too"
/>

Approximately a year ago, a situation reminiscent of the current one unfolded with the "no-one-left-behind" package. This package intricately wove itself into a convoluted web of dependencies, relying on every publicly available NPM package.

In 2016, the removal of the left-pad package from the npm registry caused widespread internet disruptions and prompted npm to enact changes. A significant policy alteration ensued, making it more challenging for authors to unpublish packages: they could now only do so if no other package on the npm registry depended on theirs. Ironically, this policy has now left PatrickJS, the mastermind behind "everything," grappling with the difficulty of removing his prank packages due to the extensive dependency chain he meticulously established.

## Current status

<Image
  src="https://substackcdn.com/image/fetch/$s_!0_PC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b1124d-4702-41fa-883f-d4385802f2e6_1024x614.jpeg"
  alt="High Quality CHAD Ashneer Blank Meme Template"
  caption="From different sources, it is understood that they have made the registry private, and that has resolved the issue"
/>

While "everything" continues to persist on the registry, the numerous "@everything-registry" scoped packages associated with it have been made private, potentially offering a resolution to the [issue](https://github.com/everything-registry/everything/issues/17) (inaccessible since it was made private).

## Links

<Image
  src="https://substackcdn.com/image/fetch/$s_!u6u_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84a0265a-18d4-452a-84dc-fc5caaf2bbbf_680x413.png"
  alt="image"
  caption="To amplify the mayhem they instigated, the creators went ahead and secured the domain as well xD"
/>

Not so fun fact: To further accentuate the chaos they caused, the creators registered the domain [https://everything.npm.lol/](https://everything.npm.lol/).

---

## Closing

That's all for now! Take care until next time.
Subscribe for free to receive the latest brewed updates.
Curated with ♥]]></content:encoded>
        <pubDate>Thu, 11 Jan 2024 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Must follow Ethics behind Pragmatic Programmer]]></title>
        <link>https://ohmyscript.com/blogs/pragmatic-programmers/</link>
        <guid>https://ohmyscript.com/blogs/pragmatic-programmers/</guid>
        <description><![CDATA[Fundamental ethics every engineer should follow to build quality, responsible software.]]></description>
        <content:encoded><![CDATA[# Must follow Ethics behind Pragmatic Programmer

This article speaks about the fundamental ethics that every programmer/developer must follow so as to be among the elite and the best in the evolving software and technology industry.

Most developers fail to understand two things:

1.  What do people mean when they say **Pragmatic**?
2.  What does it take to become a programmer following **Pragmatism**?

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*SdvdxxM4SuPah8wy9xQ9nA.jpeg"
  alt="Derives from Latin /praɡˈmatɪk/ (adjective)"
  caption="Derives from Latin /praɡˈmatɪk/ (adjective)"
/>

## What is pragmatism?

In the simplest words I can put it in, **being** **pragmatic** can be interpreted
as “**_Being Skilled in Craftsmanship_**”.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*7HZJUhei-G1UJUQHzYUjMA.jpeg"
  alt="Pragmatism is about practicality"
  caption="Pragmatism is about practicality"
/>

Pragmatic is all about practicality and being guided by “_the priority of action over doctrine and of experience over fixed principles_”.

> # “ I am a person who values craftsmanship, not because the end result is beautiful, but because it is an investment in my future. ”

**_Being pragmatic_** is about being a better developer. It is less about skills and more about experience and practice. It takes a lot of practice and experience to evolve as a pragmatic developer.

With every new experience, you take away something new that you learn, which will definitely connect the dots as you move forward to face new challenges.

A developer must understand that every piece of software you develop is a craft. Each person involved in building that craft is equally responsible. In such a scenario, how do you think **pragmatism** can help?

## Why be pragmatic?

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*UQwODaVTQg56NaZDaZAZSQ.gif"
  alt="Scenario: construction of a house"
  caption="Scenario: construction of a house"
/>

For instance, consider the construction of a house, in which various craftsmen have an important role to play. From architects and civil engineers to painters and laborers, all of them contribute their own skills to beautifying and completing the house.

In the same context, when you are part of a team that is building a product, your individual contribution plays a vital role. Whether you are on the backend, frontend, or DevOps team doesn't matter; your contribution is important, as you represent the whole team building the product, not just yourself as an individual.

As said, Pragmatism is more about how you can make a quality contribution to the team, not just as a developer but also as a team member.

Wondering, what does it take to follow **_pragmatism?_** Let us figure out what distinguishes Pragmatic Programmers from the crowd.

### Being Responsible

In every project, there will definitely be problems. Problems come packed with every project you work on, despite every safety measure: cross-verifying the minutes of meetings, documentation, testing, and so on.

Even with all those measures, deliveries may be delayed or unforeseen technical problems may come up, which is unavoidable at times.

Take responsibility and be accountable for your responsibility. Ensure you are communicating and keeping the team informed with all the updates.

The smallest mistake during the presentation of your product to the client can lead to catastrophic damage to the reputation of the company or firm you are representing.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*am9H6o14JThJxN1m-ko5hw.gif"
  alt="communicate-developer"
  caption="Communicate: Verify what you have understood"
/>

Miscommunication, or a lack of communication, can mislead the team or derail the project. When I was coming out of my intern developer phase, we had to showcase our work to the client. The backend team made a last-minute functionality change and conveyed it to the mobile team, but failed to inform the web team, expecting the message would be passed along. This ended up causing a lot of fuss.

Ensure that everything is conveyed to the team properly, and avoid miscommunication. If you or the team feel the product is not in the best shape for the presentation, ask for an extension, take responsibility for what has happened, and ensure it is not repeated.

If we can accept and be proud of our ability, we must accept and be honest about our shortcomings.

In a situation where you are not able to solve a problem due to a shortage of resources or some other issue, do not just present the problem you are facing; rather, present the reasons why you cannot solve it and provide alternative solutions.

### Avoiding Tech Debts - Software Entropy

When you work in a team, you are part of a group sailing in the same boat to build a product. A leak in the boat (like a wrong decision or bad code you put into the codebase) can cause the boat to rot (technical debt) and sink, taking everyone in it down with it.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*7Wwp4nbjrjo4wDM5jbLLGQ.jpeg"
  caption="Avoid blame game or safe game, be accountable"
  alt="Avoid blame game or safe game, be accountable"
/>
It is our responsibility to fix those leaks as soon as we figure them out.
Do not leave such loopholes open even if the deadline is near. If it is not a serious
one, add a TODO and fix it later.

In one of the projects I worked on, we initially did not structure the code when we had only two or three services to deal with, due to time constraints. As we kept building features on top of them, we ended up with 12 services and a codebase beyond repair, and in the end we had to restructure it in a single go, which was such a pain.

### Take the initiative, be the catalyst

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*CkCbCusvdooqolMJycindA.png"
  alt="Take responsibility to fix issues"
  caption="Take responsibility to fix issues"
/>

Sometimes we come across dirty code. When that happens, try to convey it to your team. If time constraints do not allow the team to fix it, the best you can do is flag it.
When the code is broken badly enough that you have to fix it (a severe bug), inform the team, take the initiative and responsibility of fixing it, and get the fix reviewed.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*vM3bXTqhh4x_Jioy4vaJVA.gif"
  alt="Gardening === Software Development process"
  caption="Gardening === Software Development process"
/>

Gardening is the closest analogy to the development process. Just as you need to care for a sapling every day after planting it, as you build the code, ensure no part of the codebase is rotting. Once you develop the software, it needs to be nourished carefully and continuously.

Whenever new requirements come in, ensure that developing the new feature doesn't break or heavily change existing functionality. Also ensure that the new requirement is added in such a way that there is room for the next set of requirements.

### Good enough isn’t good enough

How can we evaluate that the product/software we are building is good enough?

Good enough can have two meanings:

- **Fully satisfactory**: Requirements are met, expectations are satisfied, the underlying problem is solved.

- **Barely adequate**: Lowest level of performance that doesn't qualify as a failure.

“**Good Enough**” isn’t “**Good Enough**”, in the sense that “**Fully Satisfactory**” is not the same as “**Barely Adequate**”.

> # “ How can we know if it’s good enough? ”

Perfect Software is a myth. Perhaps the key to the right balance is to understand the perceptions of the user and stakeholder.

The software/products you build are going to be used by a certain set of users, or by the general public. It is always best to get feedback from end-users on their experience. You can release a beta (release candidate) version and roll it out to your stakeholders, other team members, and possibly end-users for feedback.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*guVWAY5WFsUo2Kzvw5plPQ.png"
  alt="Minimum Viable Product"
  caption="Minimum Viable Product"
/>

As I said earlier, perfect software is a myth. Even so, there is something we call a Minimum Viable Product (MVP). Every product you build has a minimum viability it must reach to be accepted as a functional product.

MVP can be best described as a mid-point between the earlier stages of the development process and the final product, meeting all the fundamental requirements of the product.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*MYlzSr7XfAlQheUDyrV0IA.gif"
  alt="Minimum Viable Product"
  caption="MVP"
/>

Assume a mansion is under construction; initially, it is not fit for you to stay in. But once the construction is completed along with the required carpentry work, it becomes fit (viable) enough for you to stay.

Same way, when a product solves the fundamental needs of a given problem statement, it can be considered as a _Minimum Viable Product._

> # “ Apple shipped the first version of Hypercard with about 500 known bugs in it, yet the product was a smashing success. The Hypercard QA team chose the right bugs to ship with. They also chose the right qualities… it isn’t the number of bugs that matters, it’s the effect of each bug. ”
>
> # - [James Bach](https://www.egr.msu.edu/classes/ece480/capstone/docs/good_enough.html), person who coined the term **Good Enough Software**

As we move on: there must have been a situation where you added some functionality as an add-on, even though it was never mentioned in the requirements, based on your astrological assumptions.

The add-on that you add to the project, might be the one thing that is not necessary and ruins the quality or standards of the project.

<Image
  src="https://cdn-images-1.medium.com/max/2400/1*VVmaldTfICmsQQm3WcGqjA.jpeg"
  alt="Do not let add-ons deviate from actual requirements"
  caption="Do not let add-ons deviate from actual requirements"
/>

Identify the expectations based on the timeline and the resources given. Give your best for programmers, end-users, and future code maintainers. Nothing is ever perfect. Most importantly, we need to know when to stop. Do not kill a good program with over-engineering.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*rOP00dE5OL0_8pX9e2JPKQ.png"
  alt="Avoid Over-Engineering"
  caption="Avoid Over-Engineering"
/>

### Knowledge Portfolio, long-term investment

As you all know, the software and technology industry is an ever-changing landscape, where technology, and the science behind it, keeps evolving. At this pace of evolution, keeping yourself updated on a technology is just as important as being an expert in it.

If you fail to keep yourself updated on the technology you are working with, over time you will be considered an outdated resource.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*SVQ9r-dpGMBEKZWmnDqJ1w.gif"
  alt="Knowledge Portfolio, a type of investment with ensured ROI"
  caption="Knowledge Portfolio, a type of investment with ensured ROI"
/>

Investing in Knowledge Portfolio is a type of investment that has an assured ROI.

> # “An investment in knowledge always pays the best interest.”
>
> # - Benjamin Franklin

Considering Knowledge Portfolio as an investment, you can classify the investments into two types:

- _Low Risk, High Reward Investments_

- _High Risk, High Reward Investments_

**Low Risk, High Reward Investments** are the types of investment where you are investing your time in learning and sharpening your skills in the technology that you are proficient in.

On the other hand, **High Risk, High Reward Investments** are those where you are not sure about the future of a technology, but it is currently trending.

You should aim to find emerging technologies and learn them before they become popular.

> # “ Learning Java when it first came out may have been risky, but it paid off handsomely for the early adopters who are now at the top of that field. ”
>
> # - The Pragmatic Programmer

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*Smpv5oXpeU7XcmW86CvLvQ.jpeg"
  alt="Assures the best ROI for each investment"
  caption="Assures the best ROI for each investment"
/>

Keep investing regularly in diverse technologies. Always retain the spirit of learning new things that may enlighten you to think outside the box.

Always be skeptical and analyze what you have heard or read about.

### Communication, a lethal weapon

Communication is an important factor that we, as developers, sometimes fail to fulfil. Communication is the art of transmitting information, ideas, and attitudes from one person to another; a process of meaningful interaction among humans.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*p7PgKTQ_ESQb83yffmYg5g.png"
  alt="Best ideas or suggestions or plans cannot take form without communication"
  caption="Best ideas or suggestions or plans cannot take form without communication"
/>

As a developer, we have to communicate at various levels, starting from communication within the team up to communication with the client.

Most of the time we fail to identify and understand the WISDOM principle behind communication.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*RUvGy4-YqsZzYI4NLXjpeg.gif"
  alt="WISDOM Principle of Communication"
  caption="WISDOM Principle of Communication"
/>

One of the most important factors is realizing whom you are conveying your idea to.

> As a product owner, I would want to know about the product's viability and the problem it solves.
> As a developer, I would want to know how we can technically solve or deal with the problem statement.
> As an end-user or client, I would want to know how the product works and how it fulfils all my requirements.

This is certainly something we always miss out on. Even if you are the best communicator in the team, if you explain the aspects and flaws of the product in technical terms to the client, they are unlikely to understand unless they come from an IT background or have some prior knowledge.

As far as effective communication is considered, listening is also as important as conveying your idea. Always be an attentive listener.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*-lOY-YKFDL2KqutBMbCHMg.jpeg"
  alt="Be an attentive listener"
  caption="Be an attentive listener"
/>

Finally, how you communicate your idea to the other person determines how effective the exchange of ideas is. Not every meeting has to be a presentation of your idea; some might just be about emailing the minutes of the meeting and cross-verifying them.

<Image
  src="https://cdn-images-1.medium.com/max/2000/1*sRgj76Mv_8Iz-62GEHS6RA.png"
  alt="Always cross-verify your understanding from the exchange of ideas"
  caption="Always cross-verify your understanding from the exchange of ideas"
/>

And however you are doing it, make sure you are doing it right. Even when you write emails or minutes of the meeting, ensure there are no grammatical errors or spelling mistakes.

This is vital because, when these mistakes happen with the client we are working with, they speak about the firm you are representing and not just yourself. That is why it is necessary to be careful about these minute details.

<Note>
  Communication is not just about what you say. It is about what you say and
  the way you say it.
</Note>

<Image
  src="https://cdn-images-1.medium.com/max/2082/1*nMyZs3Z-bJzgWvWM-VUknA.png"
  alt="Choose the right form of Communication: You could have just emailed instead of a presentation"
  caption="Choose the right form of Communication: You could have just emailed instead of a presentation"
/>

Always respond to emails, missed calls, Slack messages, whatever.

<Note> _You don’t have to immediately respond to everything!_ </Note>

Documentation is another means of communication, basically a written proof of the process flow of functionality. In the software and technology industry, documentation plays an important role in keeping requirements intact to the final functionality.

Always ensure that you document the minutes of every meeting with the client and within the team.

I know this has been a longer post than expected. If you got this far, I appreciate your patience and thank you for your time. I had too many thoughts to put down, and adding them all to a single post may not have been the right idea. Anyhow, wrapping up: these simple baby steps are the fundamental ethics that will make you stand out from the rest of the developers out there. As developers, we should and must strive to be better than we were yesterday. That was my first step towards being pragmatic, and I believe it could be yours as well.

I hope this article has helped you figure out those minute things you had missed. Stay tuned for more.

Happy Learning! :)

---]]></content:encoded>
        <pubDate>Sat, 15 Jul 2023 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Bit Wars: 32-bit vs 64-bit Systems Explained]]></title>
        <link>https://ohmyscript.com/blogs/bit-war-32vs64/</link>
        <guid>https://ohmyscript.com/blogs/bit-war-32vs64/</guid>
        <description><![CDATA[easy-to-follow comparison of 32-bit and 64-bit systems; memory, performance and why it matters.]]></description>
        <content:encoded><![CDATA[# Bit Wars: 32-bit vs 64-bit Systems Explained

## Explainer

- A 64-bit system is like a highway, while a 32-bit system is like a bypass road.

- A 64-bit system allows more data to be transmitted simultaneously, resulting in improved overall speed and performance.

<Image
  src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/go7yje8naozdtpkkvc10.jpg"
  alt="32-bit vs 64-bit analogy"
  caption="32-bit vs 64-bit analogy"
/>
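To put rough numbers behind the analogy, here is a small sketch (assuming byte-addressable memory and ignoring OS reservations) of how pointer width bounds the amount of addressable memory:

```typescript
// With n address bits there are 2^n distinct byte addresses.
const bytesAddressable = (bits: number): number => 2 ** bits;

const GiB = 2 ** 30;
console.log(bytesAddressable(32) / GiB); // 4 GiB: the classic 32-bit RAM ceiling
console.log(bytesAddressable(64) / GiB); // 17179869184 GiB, i.e. 16 EiB
```

The jump from 32 to 64 bits is not a doubling of capacity but a squaring of the address space, which is why 64-bit systems shrug off the 4 GiB limit entirely.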

## Additional Context

<Tip>
  A 64-bit system can handle more data at once than a 32-bit one, like counting
  with 64 fingers instead of 32; hence the analogy of a multi-lane highway
  versus a two-lane bypass road.
</Tip>]]></content:encoded>
        <pubDate>Mon, 20 Jun 2022 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[My Journey from Intern to Developer]]></title>
        <link>https://ohmyscript.com/blogs/from-intern-to-developer/</link>
        <guid>https://ohmyscript.com/blogs/from-intern-to-developer/</guid>
        <description><![CDATA[A reflection on the growth, lessons, and mindset shifts from intern to developer.]]></description>
        <content:encoded><![CDATA[# Reflecting on My Voyage as an Intern to a Developer

Hi everyone!

This article captures a few major learnings I want to carry forward from my journey, and the mistakes I wouldn't want to commit again. Perhaps it can serve as a guidebook for those starting their careers in the software industry. I have tried to put up my learnings and takeaways in the simplest way I can.

<Image
  src="https://miro.medium.com/max/2400/1*cD-wC5pUgL-mSI8bpyYF9A.png"
  caption="Learning is a Continuum Concept, where at every level and stage of your life, you learn and take away something from your previous experiences"
/>

I am closing in on completing two years of my career at a startup, growing from Graduate Intern Developer to Junior Developer, and what a journey it has been. I have learnt a lot during the course of this ride. I explored new corners of development and DevOps technology. I have made mistakes and learnt from them.

During the internship, I encountered a bunch of challenges that are typical for anyone going through the transition from the college-graduate phase to the working-professional phase. In what follows, I address the challenges I faced, along with the changes in my perception while growing as a working professional.

## Takeaways

**Some takeaways so far from my experience:**

### 1. Tutorial Hell

Initially, when we start out as newbies, it is quite common to prefer learning from YouTube tutorials, Udemy or any other LMS platform. Some might prefer following and reading open blogs like freeCodeCamp or Medium.

Now, let us first understand what Tutorial Hell is.

> **Tutorial Hell** is the all-too-common situation where you find a lot of tutorials and are not sure which one to follow and learn from.
> And even assuming you somehow figure out which tutorial to learn from, you then struggle with what to do or build with what you have learnt.

Initially, I had a very hard time getting through this situation. I was learning Node.js, I was very new to the event-driven programming paradigm, and I had a lot of confusion about the fundamentals, despite having followed one of the many tutorials available.

Generally speaking, I do not have any issue with tutorials, but I find that most tutorials tend to skip four or five core concepts, expecting you to already have a technical grasp of them. Those missed concepts create a lot of voids as you go ahead.

Let me give you an instance from my own experience. If you have worked with any JS-based framework or library, you probably know the different ways of handling asynchronous operations (callbacks, Promises, async/await). For those of you who do not, they are simply three different ways of handling async operations. The point is, callbacks are a real pain, whereas Promises and async/await are better, cleaner ways of writing code.

Initially, when I started writing RESTful APIs, I handled asynchronous operations the callback way, because the tutorial I had followed did not bother to speak about Promises and async/await. For around a month or so, imagine my life: handling every DB call and asynchronous function as a callback. It was too difficult to write even simple logic, despite the problem statements being quite straightforward.
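To make the contrast concrete, here is a minimal sketch of the same lookup written both ways. The `getUser` helper is hypothetical, standing in for a real DB call; this is not the actual code from back then.

```typescript
// Hypothetical async helper standing in for a database call.
const getUser = (id: number): Promise<{ id: number; name: string }> =>
  new Promise((resolve) => setTimeout(() => resolve({ id, name: "Asha" }), 10));

// Callback style: every dependent step adds another level of nesting.
function getUserCb(
  id: number,
  cb: (err: Error | null, user?: { id: number; name: string }) => void
): void {
  getUser(id).then((user) => cb(null, user), cb);
}

getUserCb(1, (err, user) => {
  if (err) throw err;
  console.log(user?.name); // Asha
});

// async/await: the same flow reads top to bottom.
(async () => {
  const user = await getUser(1);
  console.log(user.name); // Asha
})();
```

With one call the difference looks small, but chain three or four dependent DB calls and the callback version collapses into deeply nested pyramids, while the async/await version stays flat.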

With time, as I kept exploring different technologies, one thing I realised is that nothing beats the **OFFICIAL DOCUMENTATION**. Every technology you want to learn has its own learning portal or official documentation published, which covers every aspect of that technology. Since then, for any technology I want to explore, I always prefer to follow the official documentation.

<Image
  src="https://miro.medium.com/max/300/1*dEaEwTrhabKDUxcmDpttDA.png"
  alt=""
  caption="Avoid getting cobwebbed by tutorials"
/>

Finally, after you learn from a resource, another overwhelming situation follows: you are even more confused about what to do with your updated knowledge portfolio.

Initially, since I was already working on a project, I could easily fill in whatever I learnt to fulfil the project requirements. It allowed me to constantly learn and explore.

There could also be scenarios where you learn technologies outside the scope of the project you are working on. How do you deal with that?

<Image
  src="https://miro.medium.com/max/700/1*V8wl8X5-fnrwRhNX3125Hg.jpeg"
  alt=""
  caption="Difference between Knowing vs Understanding"
/>

The best thing one can do after learning a technology is **BUILDING SOMETHING**. Build or create something you want, be it simply for fun. It does not have to be a useful real-world product; it is simply a project through which you can carry conceptual learning into practice.

If it’s a new programming language, you can explore more by solving problems on HackerRank or other competitive platforms. Maintain a GitHub repository to keep track of whatever you are learning, with simple documentation for your own understanding. This gives you your own documentation that you can look back into whenever you want. Creating and documenting proofs of concept is a big deal.

Meanwhile, **KEEP CODE-READING** from different GitHub repositories. I used to code-read randomly just to get a glimpse of different approaches to solving problems and writing code. This actually helped me improve the way I wrote code.

<Image
  src="https://miro.medium.com/max/700/1*G6IMBGB--Xr0iHrajQROpA.jpeg"
  alt="Open Source Contribution"
  caption="Open Source Contribution allows you to spread your wings and collaborate with people having different ideologies."
/>


One more way to get through this situation is to **CONTRIBUTE TOWARDS** **OPEN SOURCE**. Try to search for some Open Source Projects built on top of the technology and try actively contributing towards it or recreating your own project as a Proof of Concept.

### 2. Build Products, not Junk

This seriously was a huge misconception I had initially: I thought that trying to solve the problem statement and coming up with a solution was the most important thing. Probably because of an attitude your graduate syllabus sometimes forces you to inculcate, where finding a solution is considered more important than any other factor.

There are two factors that we tend to fail to focus upon, firstly **END-USER** of the application and secondly **CLARITY over USER REQUIREMENTS**.

<Image
  src="https://miro.medium.com/max/700/1*1cujLlFONgNdPC5RtF1FrA.png"
  alt="User requirements mismatch"
  caption="What user wanted vs What you delivered"
/>

Sometimes we fail to understand the user requirements. At times, we misinterpret a given user requirement due to our individual perception and experience, which, of course, is not a mistake. But it is very important to confirm what you have understood with the client/user/product owner.

It is always better to ask the product owner/client, at the very initial stage of the project, whether your understanding of the requirements is accurate. When you question the client at the very first stage, you won’t end up building a product that was not required.

Similarly, whenever a requirement pitches in mid-development-cycle, ensure you clarify that as well, just to make sure your project doesn’t end up going south.

<Image
  src="https://miro.medium.com/max/602/1*x7B9p3FkuOID95MPyu3nIA.png"
  alt="Bad UI/UX door design example"
  caption="Bad UI/UX: Label [PUSH/PULL] and Handlebars. Why? Do I need the handlebar to push?"
/>

Always focus on building the product on the basis of how the product owner (End-User) wants it.

<Image
  src="https://miro.medium.com/max/658/1*p6-KTu5kcpbD-fVBPIgE9A.jpeg"
  alt="Bad elevator button design"
  caption="Bad Design Aesthetics: A button showing the corresponding number and another actual button to click the floor number"
/>

When you are building the product, ask yourself, “**As an owner of the product, does this make my work easier? Does this solve my problem? Is this solution feasible? Is this the only approach available? Are there better approaches?**”.

By the time your research is done and evaluated against the above factors, you will have a conclusive, evident answer, giving you clarity over how to go about building the product with a better user experience.

Keep in constant touch with the end user, and always build the product iteratively.

### 3. Knowing the Best Practices

For those who do not know what I mean by **Best Practices**: best practices are proven guidelines with a strong theory behind them.

<Image
  src="https://miro.medium.com/max/700/1*c6GeX3zdBru2d9yHjp-N4Q.jpeg"
  alt="Best practices analogy"
  caption="Daily Healthy Routine is the closest analogy to the Best Practices"
/>

Just as we follow certain practices on a daily basis, like bathing and washing our hands after eating, to maintain our hygiene and keep ourselves neat and tidy, in software development we follow a certain set of proven norms to ensure that the product we are building doesn’t rot with time or newer requirements.

Every technology you learn has best practices tagged along with it, and it is quite difficult to remember them all. Over time, one thing I realised and noticed about **BEST PRACTICES** is that most of us try to memorize and recollect the practices, yet fail to understand the theory behind them.

If you understand the theory behind the practice, it wouldn’t be difficult to remember them while implementing it.

> Practice tells you that things are good or bad; theory tells why.

Let us take a simple instance: whenever you want to scale your project, how do you figure out the approach?  
There is a proposed model called the **Scale Cube**, which describes the basic principle behind each scaling strategy and why you should consider it when scaling a project.

<Image
  src="https://miro.medium.com/max/427/1*cg9znQR8I2orO8ooLMCv4w.png"
  alt="Scale Cube diagram"
  caption="Scale Cube: Service Scalability Best Practices"
/>

Each axis of the cube indicates a scaling strategy, as shown below:

- X-axis: **Scaling by cloning**, otherwise known as Horizontal Duplication. Monolithic projects, when deployed, usually run multiple cloned copies of the application behind a load balancer. When you have only one service to deal with, the usual advice is to go with a **_Monolithic Architecture_**.
- Y-axis: **Scaling by splitting different services**, known as decomposition. Projects with complex problem statements and several services are often advised to split the application into multiple distinct services. When you take this kind of decomposition approach, we call it a **_Microservice Architecture_**.
- Z-axis: **Scaling by splitting similar things**, known as Data Partitioning. In projects where robustness is a very high priority, improving the infrastructure from the data-storage point of view helps a lot. In this approach, there are several replicated copies of the code, but each of them serves only a subset of the data.

As the above instance shows, knowing the theory behind the Scale Cube makes it easier to understand which approach to consider when building the project architecture, based on the business requirements of the project, and to evaluate whether or not to scale the project at all.
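As a toy sketch of the Z-axis idea (the shard names here are hypothetical, purely for illustration): route each record to a partition by its key, so every replica serves only a subset of the data.

```typescript
// Hypothetical shard names for illustration.
const shards = ["users-shard-0", "users-shard-1", "users-shard-2"];

// Route each record to a partition by its key, so every replica
// serves only a subset of the data (the Z-axis of the Scale Cube).
function shardFor(userId: number): string {
  return shards[userId % shards.length];
}

console.log(shardFor(7)); // users-shard-1
console.log(shardFor(9)); // users-shard-0
```

Real systems use more robust schemes (consistent hashing, range partitioning) so that adding a shard does not reshuffle every key, but the routing principle is the same.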

### 4. Debugging

At the early stage of my learning, I devoted a lot of my time to debugging, since I was very new to the technology I was working on and did not yet have a grasp of the errors and crashes that followed. I always used to seek help from **StackOverflow** and the **Grepper** extension to find the cause and origin of a bug and apply quick fixes. As I kept exploring, I became quite familiar with a set of errors.

But as I kept exploring new services and stacks, sometimes due to bugs in the packages I was using, I kept encountering new types of errors. I couldn’t afford to spend more time debugging and resolving errors, so I started following a simple backtracking method called the **Five Whys**.

Debugging is an aggravating task when you cannot figure out the origin and cause of an error. The Five Whys is a very basic technique that helped me determine the root cause of an error in the easiest way: by iteratively asking the question “Why?”.

<Image
  src="https://miro.medium.com/max/400/1*9vUBXY9HW42xWutui8QeIw.png"
  alt="5 Whys debugging technique"
  caption="5 Whys: Technique used by Toyota Motors for finding manufacturing defects"
/>

I used loggers to pinpoint exactly where an issue originated. This saves a lot of time; it is important to find the origin and root cause of the error.
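As a sketch of that logging habit (the `log` helper and `fetchOrder` function are hypothetical; real projects might reach for a library such as winston or pino instead):

```typescript
// Hypothetical scoped logger: timestamp plus the scope the message came from.
function log(scope: string, message: string): void {
  console.log(`[${new Date().toISOString()}] [${scope}] ${message}`);
}

// Logging on entry and before each failure path pinpoints the origin.
function fetchOrder(id: number): { id: number } {
  log("fetchOrder", `called with id=${id}`);
  if (id < 0) {
    log("fetchOrder", "invalid id, aborting");
    throw new Error("invalid order id");
  }
  return { id };
}

fetchOrder(42); // the log line shows exactly where execution reached
```

The scope tag is what makes backtracking cheap: when an error surfaces, the last few log lines tell you which function, and which branch within it, execution actually reached.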

Also ensure that you document every bug tracked at the application level. It is important to document the bug, maintain the history of bug tracks, and record the solution for each bug reported.

### 5. When you have to explore something new, create Proof of Concept

Whenever a new problem statement pitches in, something you have not worked on before, always create a Proof of Concept for it. Try out and research the different possible solutions, and build a simple Proof of Concept with enough documentation for your teammates to follow up on. Have a healthy discussion and take opinions from your team.

A Proof of Concept is not a final implementation; rather, it is intended to prove that a given approach would work effectively for the problem statement posed. Take feedback on the PoC, and do not forget to keep it in your GitHub repository for future reference, for yourself and others.

### 6. Unit Tests make your code better

I learnt this the hard way, but honestly speaking, unit tests are the best way to catch bugs. In the initial stage, I hardly knew or cared about writing unit tests for the APIs; I often found myself concentrating only on completing tasks in the expected slot with good quality.

<Image
  src="https://miro.medium.com/max/225/1*jSLR-uO-8cBDuAvvWA6BKQ.jpeg"
  alt="Unit Testing illustration"
  caption="Unit Testing: Verify if the piece of code is doing what it is intended to do"
/>

Writing unit tests helps you verify that the code is doing what it is intended to do. Unit tests provide strong backbone support for maintaining your code and safely refactoring it from time to time. Sometimes, unit tests helped me discover edge cases that I had missed. Since I learnt to write unit tests, I have made it a habit to write them for the code I write, which gives me more confidence in the quality of the code I deliver.

### 7. Maintain Documentation

**Documentation** is always the best way to define a feature from the User’s perspective. Before developing the feature, document the feature first. As a developer, always maintain documentation for the feature you are building.

<Image
  src="https://miro.medium.com/max/700/1*P9uD6rvLW1FhNPoFOIdz6w.jpeg"
  alt="Documentation quote"
  caption="Documentation is a love letter that you write to your future self - Twitter"
/>

Whenever you can, ensure that the document you have written is reviewed by the end users and stakeholders before any development begins. As and when the feature is modified during development, make sure that the corresponding changes are documented. And just as the documentation is modified, so should the unit tests be.

### 8. Writing Blogs

**Writing blogs** is useful for many reasons. It helps you realise whether you have understood the concepts well and whether you can explain them in a way others can understand. As developers, we mostly work on creating and adding value to the product we are building, perhaps by resolving a bug or implementing new features. Writing blogs, on top of that, gives you a better understanding of the concepts and a very good feeling about helping people. Some day, someone might read your content and be able to build a feature their project requires through your guidance. Your experience can give someone else proper direction.

### 9. Contribute towards Open Source

Open Source has a great community built around it. Contributing to and being part of the Open Source community allows me to explore and embrace newer perspectives, and it helps me a lot in improving my problem-solving skills.

I get to meet like-minded people, and they inspire me to become a better developer. It is always nice to be part of a peer group passionate about developing and building products. Trust me, it feels great to have your name as a contributor to someone’s project; it boosts your positivity and confidence.

### 10. Always be Open to Continuous Learning

Firstly make sure that you build upon your fundamentals. Keep your fundamentals strong. If your fundamentals are strong, switching between similar technologies and exploring them would not be a difficult task.

Keep exploring new technologies. The Software and Technology Industry is an everlasting industry that keeps expanding with time. As time evolves, the industry also keeps evolving with new technology arising every new day. Always ensure you are open to switch and learn, explore and practically work on those technologies.

Read Technical and Non Technical books to keep yourself aware of the revolutionary changes happening in the industry. Keep reading blogs published by Major MNCs and have an understanding of their system design and architecture and the theory behind it.

I always keep exploring different technologies, because it helps me have a wider perspective. Wider perspective helps you come out with better and creative solutions. I prefer to be a **Generalizing Specialist**.

> A Generalizing Specialist is a jack of all trades, and a master of a few

### 11. Be polite

Life becomes much easier as a developer when you start listening to others. Always have a certain level of humility when listening to others. It is very important to be open to different perspectives and opinions.

> Every expert was once a beginner.
>
> You were a beginner before you reached this stage today.

Always be there for those who need your guidance and keep helping others learn and grow. In the process of guiding others and helping them expand their wings, there is a lot that you will explore and experience as well.

These were some of the major takeaways from my journey from intern to developer. I hope all the beginners reading this article will find them useful for their own journey. Those who have already passed this phase might find it very relatable.

The post has been longer than expected, if you got this far, I appreciate your patience and thank you for your time.

If you like the article, hit the like button, share the article and subscribe to the blog. If you want me to write an article on a specific domain/technology I am proficient in, feel free to drop a mail at [hi [at] ohmyscript [dot] com](mailto:hi@ohmyscript.com)

Stay tuned for my next article.

That's all for now. Thank you for reading.

Signing off until next time.  
Happy Learning.]]></content:encoded>
        <pubDate>Thu, 13 May 2021 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[SOLID Principles: Write SOLID programs; Avoid STUPID programs]]></title>
        <link>https://ohmyscript.com/blogs/solid-principles/</link>
        <guid>https://ohmyscript.com/blogs/solid-principles/</guid>
        <description><![CDATA[Overview of the SOLID design principles in object-oriented programming and how they help avoid STUPID code patterns.]]></description>
        <content:encoded><![CDATA[# SOLID Principles
# Write SOLID programs; Avoid STUPID programs

<Image 
  src="https://iamskb258154309.files.wordpress.com/2020/09/dont-be-stupid-grasp-solid-north-east-php-1-638-1.jpg" 
  alt="SOLID principles vs STUPID code" 
  caption="Don't be STUPID, be SOLID"
/>

**_“Think Twice, Code Once”_**

Hi everyone!

Previously, in [my last article](https://ohmyscript.com/blog/engineering-principles-for-programmers/), I had explained some of the must know fundamental [programming principles](https://ohmyscript.com/blog/engineering-principles-for-programmers/), which are applicable in any programming paradigm that you follow. Be it **_Functional or Object-Oriented paradigm/programming_**, those serve as the **primary fundamentals**.

This article purely speaks of another five design principles, which hold good specifically for problems that can be solved using the OOP paradigm.

The rise of the OOP paradigm brought new designs and techniques for writing the solution to a problem.

At a larger scale, though, these techniques introduced flaws into the solutions we design and write, flaws we often fail to recognize: bugs added in the form of **_STUPID code_**.

As I started programming to TypeScript standards, implementing OOP became easier, better, smaller and cleaner. One thing I realised after moving from the **_functional paradigm to the OOP paradigm_** is that, knowingly or unknowingly, we end up implementing some sort of anti-patterns in our codebase.

## What’s a **_STUPID_** codebase?

> A STUPID codebase is one with flaws or faults that affect its maintainability, readability or efficiency.

> Anti-Pattern Code == STUPID Code

## What causes STUPID codebase?

<Image 
  src="https://iamskb258154309.files.wordpress.com/2020/10/captured-4-1.png?w=1024" 
  alt="STUPID acronym breakdown" 
  caption="What causes STUPID codebase"
/>

## Problems / Anti patterns

**_Why be STUPID, when you can be SOLID_**

### Singleton

- **_Singleton_**: Misusing the Singleton pattern decreases the flexibility and reusability of the existing code that deals with the object-creation mechanism.
  It becomes an anti-pattern when we define a class and its object in the same script/file and export the object for reuse. The pattern is not wrong in itself, but using it everywhere inappropriately is a symptom of a sick codebase.

```typescript
/**
*
*  Creating class Singleton, which is an Anti Pattern
*  definition.
*
*  WHY?
*  Let us see.
*/
class Singleton {
  private static instance: Singleton;
  private _value: number;

  /**
  * To avoid creating objects directly using 'new'
  * operator
  *
  * Therefore, the constructor is accessible to class
  * methods only
  */
  private constructor() { }

  /**
  * Defining a Static function, so to directly
  *  make it accessible without creating an Object
  */
  static makeInstance() {
    if (!Singleton.instance) {
      Singleton.instance = new Singleton();
      Singleton.instance._value = 0;
    }
    return Singleton.instance;
  }

  getValue (): number {
    return this._value;
  }

  setValue(value: number) {
    this._value = value;
  }
  incrementValueByOne(): number {
    return this._value += 1;
  }
}


/**
*  Since the Singleton class's constructor is private, we
*  need to create an instance using the static method
*  makeInstance()
*
*  Let us see what anomalies does that cause.
*
*  Creating an instance using 'new' throws an Error:
*  Constructor of class 'Singleton' is private and
*  only accessible within the class declaration
*  const myInstance = new Singleton();
*/

const myInstance1 = Singleton.makeInstance();
const myInstance2 = Singleton.makeInstance();

console.log(myInstance1.getValue()); // OUTPUT: 0
console.log(myInstance2.getValue()); // OUTPUT: 0


myInstance1.incrementValueByOne(); // value = 1
myInstance2.incrementValueByOne(); // value = 2

console.log(myInstance1.getValue()); // OUTPUT: 2
console.log(myInstance2.getValue()); // OUTPUT: 2

/**
* Both "instances" are the same object sharing hidden
* global state: incrementing through one changes the other.
* This shared mutable state is the Singleton anti-pattern.
*/
```

### Tight coupling

- **_Tight Coupling_**: Excessive coupling/dependency between classes or separate pieces of functionality is a code smell we need to be very careful about while developing.
  We can spot tight coupling when a method accesses the data of another object more than its own data, or in some sort of functional-chaining scenario.

```typescript
/**
 * A simple example of tight coupling
 */

class Car {
  move() {
    console.log("Car is moving");
  }
}

class Lorry {
  move() {
    console.log("Lorry is moving");
  }
}

// Traveller1 constructs its own Car, binding itself to a concrete class
class Traveller1 {
  private carObj = new Car();

  travellerStatus() {
    this.carObj.move();
  }
}

// Traveller2 must repeat the whole pattern just to swap the vehicle
class Traveller2 {
  private lorryObj = new Lorry();

  travellerStatus() {
    this.lorryObj.move();
  }
}
```
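For contrast, here is a loosely coupled sketch of the same travellers (my own illustration, not from any particular framework): the vehicle is injected behind a small `Movable` interface, so the traveller never constructs a concrete class itself.

```typescript
// Movable abstracts over any vehicle.
interface Movable {
  move(): string;
}

class Car implements Movable {
  move() {
    return "Car is moving";
  }
}

class Lorry implements Movable {
  move() {
    return "Lorry is moving";
  }
}

// The vehicle is injected, so swapping Car for Lorry needs no edits here.
class Traveller {
  constructor(private vehicle: Movable) {}

  travellerStatus(): string {
    return this.vehicle.move();
  }
}

console.log(new Traveller(new Car()).travellerStatus()); // Car is moving
console.log(new Traveller(new Lorry()).travellerStatus()); // Lorry is moving
```

One `Traveller` class now works with any vehicle, and tests can pass in a fake `Movable` without touching real classes, which is exactly what tight coupling prevents.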

### Untestability

- **_Untestability_**: Unit testing is a very important part of software development, where you cross-check and test whether the component you built functions exactly the way expected. It is always advised to ship a product only after writing test cases; shipping untested code is very much like deploying an application whose behaviour you are not sure about.
  Apart from unit testing, we have other tests like integration testing, E2E testing and so on, which are carried out based on their use cases and necessity.

### Pre-optimizations (Overengineering)

- **_Premature Optimizations_**: Avoid refactoring code when it doesn’t improve the readability or performance of the system.
  Premature optimisation means trying to optimize code expecting it to improve performance or readability without much data assuring it, weighing purely upon intuition.

### Indescriptive convention

- **_Indescriptive Naming_**: Descriptive naming and naming conventions are two important criteria, and naming is often the most painful issue.
  After some time, when you or another developer revisits the codebase, you will be asking, ‘What does this variable do?’. We fail to decide on the most descriptive name for a variable, class, object/instance or function, yet descriptive names are very important for readability and understandability.

```javascript
/**
 * Example for adding two numbers: Avoid this
 */
function a(a1, a2) {
  // It is less descriptive in nature
  return a1 + a2;
}

console.log(a(1, 2)); // It is less descriptive in nature

/**
 * Example for adding two numbers: Better Approach
 */
function sum(num1, num2) {
  // sum() is descriptive
  return num1 + num2;
}

console.log(sum(1, 2));
// Statement is descriptive in nature
```

### Duplicity

- **_Duplication_**: Duplicated code is often the result of copy and paste, and it violates the DRY principle. You are always advised not to replicate code across the codebase; in the long run it causes huge technical debt, and it makes code maintenance tedious at larger scale.

These flaws are often overlooked, knowingly or unknowingly, and the SOLID principles serve as the best cure for them.

So, you may be wondering what the SOLID principles hold and how they solve the issues caused by the STUPID flaws. These are programming standards that every developer must understand well in order to create a product/system with good architecture.
The SOLID principles can be considered remedies to the problems caused by any of the STUPID flaws in your codebase.
Robert C. Martin, better known as Uncle Bob, the software engineer and consultant, formulated these design principles; the mnemonic acronym SOLID itself was coined later by Michael Feathers. Let’s explore the SOLID principles in a little more detail.

## Pattern fixing Anti-patterns

### Single Responsibility Principle (SRP)

A class, method or function should undertake the responsibility of one functionality. In simpler words, it should carry out only one feature/functionality.

> A class should only have a single responsibility, that is, only changes to one part of the software’s specification should be able to affect the specification of the class.
>
> - Wikipedia

In OOPs paradigm, one class should only serve one purpose. This does not mean that each class should have just one method, but the methods you define inside a class should be related to the responsibility of that class.

Let us look into it using a very basic example,

```typescript
/**
 * Here, class User bundles functionality dealing with
 * business logic and DB calls in the same class.
 *
 * STUPID Approach
 */
class User {

  constructor() {...}

  /**
   * These methods deal with business logic
   */

  // Add new user
  public addUser(userData: IUser): IUser {...}

  // Get user details based on userId
  public getUser(userId: number): IUser {...}

  // Get all user details
  public fetchAllUsers(): Array<IUser> {...}

  // Delete user based on userId
  public removeUser(userId: number): IUser {...}

  /**
   * These methods deal with database calls
   */

  // Save user data in DB
  public save(userData: IUser): IUser {...}

  // Fetch user data based on ID
  public find(query: any): IUser {...}

  // Delete user based on query
  public delete(query: any): IUser {...}

}
```

The problem with the above implementation is that the methods dealing with business logic and those dealing with database calls are coupled together in the same class, which violates the **_Single Responsibility Principle_**.

The same code can be written without violating SRP by dividing the responsibilities for business logic and database calls, as shown in the instance below.

```typescript
/**
 * We apply the SOLID approach to the previous
 * example and divide the responsibility.
 *
 * 'S'OLID Approach
 */

/**
 * Class UserService deals with the business logic
 * related to the User flow
 */
class UserService {

  constructor() {...}

  // Add new user
  public addUser(userData: IUser): IUser {...}

  // Get user details based on userId
  public getUser(userId: number): IUser {...}

  // Get all user details
  public fetchAllUsers(): Array<IUser> {...}

  // Delete user based on userId
  public removeUser(userId: number): IUser {...}

}

/**
 * Class UserRepo deals with the database queries/calls
 * of the User flow
 */
class UserRepo {

  constructor() {...}

  // Save user data in DB
  public save(userData: IUser): IUser {...}

  // Fetch user data based on ID
  public find(query: any): IUser {...}

  // Delete user based on query
  public delete(query: any): IUser {...}

}
```

Here, we ensure a specific class solves a specific problem: UserService deals with the business logic, and UserRepo deals with the database queries/calls.

### Open-Closed Principle (OCP)

This principle speaks about the flexibility of the code you write. As the name itself suggests, the principle states that the solution/code you write should always be **_Open_** for extension but **_Closed_** for modification.

> Software entities … should be open for extension, but closed for modification.
> -Wikipedia

To put it in simpler words, the code you write for a problem statement, be it a class, method or function, should be designed such that changing its behaviour does not require changing its source code.

If additional functionality is required, we should be able to add it without changing/reprogramming the existing source code.

```javascript
/**
 * Simple  Notification System Class Example for
 * violating OCP
 *
 * STUPID Approach of Programming
 *
 */

class NotificationSystem {
  // Method used to send notification
  sendNotification = (content: any, user: any, notificationType: any): void => {
    if (notificationType == "email") {
      sendMail(content, user);
    }

    if (notificationType == "pushNotification") {
      sendPushNotification(content, user);
    }

    if (notificationType == "desktopNotification") {
      sendDesktopNotification(content, user);
    }
  };
}
```

The major setback with the above approach is that if a newer way of sending a notification, or a combined notifying mechanism, is needed, then we again need to alter the definition of **_sendNotification()_**.

This can be implemented without violating the principle, as shown below:

```javascript
/**
* Simple Example for Notification System Class
*
* S'O'LID Approach of Programming
*
*/

class NotificationSystem {

    sendMobileNotification() {...}

    sendDesktopNotification() {...}

    sendEmail() {...}

    sendEmailwithMobileNotification() {
      this.sendEmail();
      this.sendMobileNotification()
    }
}
```

As you can see in the above example, when a new requirement arrived where both an email and a mobile notification had to be sent, all I did was add another function, **_sendEmailwithMobileNotification()_**, without changing the implementation of the existing functions. That’s how simple extending a feature becomes.
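Another common way to stay open for extension is to make each channel its own object behind a shared `send` contract; adding a new channel then means adding a class, not editing existing code. A minimal sketch (the class names and the channel-array design are my own, not from the original example):

```javascript
// Each channel honours the same send(content, user) contract.
class EmailChannel {
  send(content, user) {
    return `email to ${user}: ${content}`;
  }
}

class PushChannel {
  send(content, user) {
    return `push to ${user}: ${content}`;
  }
}

class NotificationSystem {
  constructor(channels) {
    this.channels = channels; // extend by injecting new channel objects
  }
  sendNotification(content, user) {
    // No if/else ladder: the system never changes when channels are added.
    return this.channels.map((channel) => channel.send(content, user));
  }
}
```

To support desktop notifications later, you would write one new channel class and pass it in; `NotificationSystem` itself stays closed for modification.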

Now, moving on to the next important principle, the **_Liskov Substitution Principle_**.

### Liskov Substitution Principle (LSP)

This principle is the trickiest one. The **_Liskov Substitution Principle_** was introduced by Barbara Liskov in her paper **_“Data Abstraction and Hierarchy”_**.
By now, you may already have guessed that this principle has to do with the way we implement abstraction.

Recalling, what is abstraction/data abstraction? In the simplest words: hiding certain details and showing only the essential features.
Example: water is composed of hydrogen and oxygen, but what we see is a liquid (abstraction).

> “Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.”
> -Wikipedia

According to the **_LSP_**, in the OOP paradigm child classes should never break the parent class’s type definition.
To put it in even simpler bits: every subclass/derived class should be substitutable for its base/parent class. If you use the base type, you should be able to use any subtype without breaking anything.

<Image 
  src="https://iamskb258154309.files.wordpress.com/2020/10/liskov-substition-principle-565x600-1.png?w=283" 
  alt="Liskov Substitution Principle diagram" 
  caption="Liskov Substitution Principle"
/>

```javascript
/**
* Simple hypothetical example that violates
* Liskov Principle with real-time situation
*
* STUPID Approach
*/

class Car {
  constructor(){...}

  public getEngine():IEngine {...}
  public startEngine():void {...}
  public move():void {...}
  public stopEngine():IEngine {...}
}
/*
* We are extending class Car to class Cycle
*/
class Cycle extends Car {
    constructor(){...}
    public startCycle() {...}
    public stopCycle() {...}
}
/**
* Since Cycle extends Car;
* startEngine(), stopEngine() methods are also
* available which is incorrect and inaccurate abstraction
*
* How can we fix it?
*/
```

An **_LSP_** violation causes tight coupling and less flexibility to handle changed requirements. Another thing we can take away from the above example and principle is that OOP is not only about **_mapping real-world problems to objects; it is about creating abstractions_**.

```javascript
/**
* Simple hypothetical example that follows the
* Liskov Principle with real-time situation
*
* SO'L'ID approach
*/

class Vehicle {
  constructor(){...}

  public move():void {...}
}

class Car extends Vehicle {
  constructor(){...}

  public getEngine():IEngine {...}
  public startEngine():void {...}
  public move():void {...}
  public stopEngine():IEngine {...}

}

/*
* We are extending class Vehicle to class Cycle
*/
class Cycle extends Vehicle {
    constructor(){...}

    public startCycle() {...}
    public move() {...}
    public stopCycle() {...}
}
/**
* Since class Cycle extends Vehicle,
* only the move() method is inherited and applicable,
* which is the precise level of abstraction
*/
```
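The payoff of a Vehicle-based hierarchy is substitutability: code written against `Vehicle` keeps working when handed a `Car` or a `Cycle`. A runnable sketch (simplified: here `Cycle` extends `Vehicle` directly, and the `position` field is my own addition so that `move()` has an observable effect):

```javascript
class Vehicle {
  constructor() {
    this.position = 0;
  }
  move() {
    this.position += 1;
  }
}

class Car extends Vehicle {
  move() {
    this.position += 10; // a car covers more ground per move
  }
}

class Cycle extends Vehicle {
  move() {
    this.position += 2;
  }
}

// Written against the base type; by LSP, any subtype must work here unchanged.
function travel(vehicle, steps) {
  for (let i = 0; i < steps; i++) vehicle.move();
  return vehicle.position;
}
```

`travel()` never needs to know which subtype it received; that is exactly what the principle guarantees.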

### Interface Segregation Principle (ISP)

This principle deals with the demerits and issues caused by implementing big interfaces.

> “Many client-specific interfaces are better than one general-purpose interface.”
> -Wikipedia

It states that we should break our interfaces into small, granular ones so that they better satisfy the exact requirements. This is necessary to reduce the amount of unused code.

```javascript
/**
*  Simplest Example that violates Interface
*  Segregation Principle
*
*  STUPID Approach
*
*  Interface for Shop that sells dress and shoes
*/

interface ICommodity {
   updateRate();
   updateDiscount();

   addCommodity();
   deleteCommodity();

   updateDressColor();
   updateDressSize();

   updateSoleType();

}
```

Here we see that a single interface, ICommodity, is created for every item/commodity in the shop, which is incorrect: every implementer is forced to carry methods it does not need.

```javascript
/**
*  Simplest Example that supports Interface
*  Segregation Principle
*
*  SOL'I'D Approach
*
*  Separate Interfaces for Shop that sells dress and shoes
*/

interface ICommodity {
   updateRate();
   updateDiscount();
   addCommodity();
   deleteCommodity();
}


interface IDress {
   updateDressColor();
   updateDressSize();
}

interface IShoe {
   updateSoleType();
   updateShoeSize();
}
```

This principle focuses on dividing a set of actions into smaller parts, so that each class implements only what it actually requires.
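JavaScript itself has no interfaces, but the same segregation shows up with duck typing: clients depend only on the small surface they need. A hypothetical sketch (the objects and `applyDiscount` are my own, not from the article):

```javascript
// A dress implements only the commodity + dress surfaces.
const dress = {
  rate: 100,
  updateRate(rate) { this.rate = rate; },
  updateDressColor(color) { this.color = color; },
};

// A shoe implements only the commodity + shoe surfaces.
const shoe = {
  rate: 250,
  updateRate(rate) { this.rate = rate; },
  updateSoleType(sole) { this.sole = sole; },
};

// This client needs only the ICommodity-like surface, so it works with both;
// it never has to know about sole types or dress colors.
function applyDiscount(commodity, percent) {
  commodity.updateRate(commodity.rate - (commodity.rate * percent) / 100);
  return commodity.rate;
}
```

Keeping the client's required surface small is the ISP idea in a language without interfaces.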

### Dependency Inversion Principle (DIP)

This principle states that we should depend upon abstractions. Abstractions should not depend on implementations; rather, the implementation of our functionality should depend on our abstractions.

> One should “depend upon abstractions, [not] concretions.”
> ~ Wikipedia

**_Dependency Injection_** is closely related to another term, Inversion of Control. These two terminologies can be explained in two different contexts:

1. Based on Framework
2. Based on Non-Framework ( Generalistic )

In the context of a framework, Dependency Injection is an application of IoC, i.e., **_Inversion of Control_**. Technically speaking, Inversion of Control is the programming principle that says: invert the control of the program flow.

To put it in simpler words, the control of the program is inverted: instead of the programmer controlling the flow of the program, the framework does. IoC is built into the framework and is a factor that differentiates a framework from a library. **_Spring Boot_** is the best example.

<Image 
  src="https://iamskb258154309.files.wordpress.com/2020/10/gr8conf-2015-spring-boot-and-groovy-what-more-do-you-need-24-638.jpg" 
  alt="Spring Boot Inversion of Control" 
  caption="Voila! Spring Boot developers! Inversion of Control made sense!! Didn't it?"
/>

<Note>
  For all Spring Boot developers: this is just like how annotations take
  control over your program flow
</Note>

From a general perspective, we can define IoC as the principle that ensures “an object does not create the other objects on which it relies to do its work”.
Similarly, from the general perspective, DIP is a sub-principle of IoC that says: define interfaces to make it easy to pass in implementations.

```javascript
/**
* Simple Example for DIP
*
* STUPID Approach
*/

class Logger {
   debug(){...}

   info(){...}
}

class User {
  constructor(private log: Logger){...} // => depends on the concrete Logger class

  someBusinessLogic(){...} // uses this.log
}


/**
* Simple Example for DIP
*
* SOLI'D' Approach
*/

interface ILogger {
  debug();
  info();
  error();
}

class Logger implements ILogger{
   debug(){...}

   info(){...}

   error(){...}
}

class User {
  constructor(private log: ILogger){...}
        // => depends on the ILogger interface

  someBusinessLogic(){...} // uses this.log
}
```

If you look at the above examples, in the second one the User class depends on the ILogger interface and not on the concrete Logger class.
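In plain JavaScript the “interface” is just the contract the injected object honours, and the classic payoff is swapping in a test double. A minimal sketch (the `MemoryLogger` and the log message are my own illustrations):

```javascript
class ConsoleLogger {
  info(msg) { console.log(`[info] ${msg}`); }
  error(msg) { console.error(`[error] ${msg}`); }
}

class User {
  constructor(log) {
    this.log = log; // depends on the logger contract, not a concrete class
  }
  someBusinessLogic() {
    this.log.info("running business logic");
    return true;
  }
}

// Because User depends on an abstraction, a test double drops straight in.
class MemoryLogger {
  constructor() { this.lines = []; }
  info(msg) { this.lines.push(msg); }
  error(msg) { this.lines.push(msg); }
}
```

`new User(new ConsoleLogger())` in production and `new User(new MemoryLogger())` in tests: the class itself never changes.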

These are the OOP-paradigm programming principles that make your code more readable, maintainable and clean.

As developers, we should avoid writing _dirty or STUPID code_. These are the basic things we need to keep in mind during development.

**_SOLID_** is no panacea or remedy for all problems, but many problems in software can be tamed with basic engineering techniques, and SOLID is one such technique that helps us maintain a healthy codebase and clean software. The benefits of these principles are not immediately apparent, but they become visible over time and during the maintenance phase of the software.

As a developer, my suggestion is that every time you design or program a solution, ask yourself, “Am I violating the SOLID principles?”. If your answer is YES too often, then you should know you are doing it wrong.
One thing I can assure you of: these principles are always going to help us write better code.

---

If you like the article, hit the like button, share the article and subscribe to the blog. If you want me to write an article on a specific domain/technology I am versed in, feel free to drop a mail at [hi [at] ohmyscript [dot] com](mailto:hi@ohmyscript.com)

Stay tuned for my next article.

That’s all for now. Thank you for reading.

Signing off until next time.
Happy Learning.]]></content:encoded>
        <pubDate>Sun, 18 Oct 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Must Know: Software Engineering Principles for Programming]]></title>
        <link>https://ohmyscript.com/blogs/engineering-principles-for-programmers/</link>
        <guid>https://ohmyscript.com/blogs/engineering-principles-for-programmers/</guid>
        <description><![CDATA[A primer on foundational software engineering principles like DRY, KISS, YAGNI, Separation of Concerns, and clean-code practices.]]></description>
        <content:encoded><![CDATA[# Must Know: Software Engineering Principles for Programming

Hi everyone! This article covers fundamental engineering programming principles that will help you become a better developer and maintain clean code.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/tv1we6yfvqbicqiavvu8.jpg)

One very important thing we constantly need to remind ourselves of is that the code we write will be consumed by other people/developers as well. Please don’t make another person’s life hard: it is very important to write code that is easy to understand, neat enough that the next reader does not go crazy, and not a mess for another person to deal with.

Most programmers and developers are constantly on a quest to improve themselves by learning a newer stack or newer technologies and tools and mastering them. But there are some fundamental norms we often overlook while programming or dealing with a problem statement.

## What makes you a good programmer?

If you ask 10 developers the same question, you will definitely get 10 different answers. Although the answers are put in different words, they most probably convey the same idea. In the year since I became a developer professionally, I have learnt many things that would have been quite handy during my undergraduate period for maintaining a large code base.

`PS: Projects built during my UG period suck. They fail all the principles I am explaining here`

Speaking from my personal experience and the problems I have been through, I believe being a good programmer is the skill of understanding a problem and coming up with the most feasible solution, one that serves well not just for the time being but also in the longer run. Along with staying up to date with newer technologies, these are some fundamental principles all developers should adhere to:

### 1. Don’t Repeat Yourself (DRY Principle)

As the name suggests, the ‘Don’t Repeat Yourself’ Principle, otherwise called the DRY Principle, simply tells us not to duplicate code across the project or code base.

When writing code, make sure you avoid duplication. This principle simply suggests: write it once, reuse it everywhere.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/1qd6vd4vb27etc25bxzs.png)

In the longer run, duplicated codes will be too difficult to manage and maintain, as newer requirements will arise.

A simple example is shown below. The non-DRY approach is at least imaginable when there are fewer than 5 chocolates; as the number of chocolates increases, such code becomes too hard to manage.

```javascript
let costofChocolate = [10, 12, 15, 20];

/**
 ** Non - DRY Approach
 ** Suppose you need to add ₹ 2 as tax for each
 **/

costofChocolate[0] = costofChocolate[0] + 2;
costofChocolate[1] = costofChocolate[1] + 2;
costofChocolate[2] = costofChocolate[2] + 2;
costofChocolate[3] = costofChocolate[3] + 2;

/**
 ** DRY Approach
 ** Suppose you need to add ₹ 2 as tax for each
 **/

function addTax(chocolatesCost, taxAmount) {
  for (let i = 0; i < chocolatesCost.length; i++) {
    chocolatesCost[i] = chocolatesCost[i] + taxAmount;
  }
  return chocolatesCost;
}

addTax(costofChocolate, 2);
```

Apart from avoiding duplication, this makes your code more readable and makes a particular piece of functionality reusable from any other component/part of your project. And the biggest pro of DRY is maintainability: if there is a bug you need to fix, you patch it in a single place, not in multiple spots.

Note:

1. Sometimes we need to be quite careful about following the DRY Principle, because at times a pair of code snippets might look similar but have a very fine line of difference.
2. Avoid premature DRY optimization.

### 2. The Law of Demeter (LoD)

Law of Demeter is a design principle, otherwise called the Principle of Least Knowledge. The law originally states that

> For all classes C, and for all methods M attached to C, all objects to which M sends a message must be
>
> M’s argument objects, including the self object, or
>
> The instance variable objects of C
>
> (Objects created by M, or by functions or methods which M calls, and objects in global variables are considered as arguments of M.)

In the beginning, when Simula came onto the market as the first language with object-oriented features, objects were simply used as a medium to transfer data from one method to another.

The basic idea behind “objects” was to transfer data to each other, i.e., to communicate. Now if you read the original law, it simply implies the following general things:

- Objects should only deal with their direct neighbours (neighbours -> method or data)
- Objects should never be dependent on another neighbour
- Objects should only expose the information used by other entities

Let me explain the simple instance;

```javascript
/**
 ** Simple Example of Law of Demeter in JavaScript
 **
 ** Assume an object userObj of the class User
 **
 **/
const userObj = new User();

userObj.getUsers().filterAge(); // Breaches the Law of Demeter (chaining through the returned object)

let userList = userObj.getUsers(); // Does not breach the Law of Demeter
let filterUsers = userObj.filterAge(); // Does not breach the Law of Demeter

/*
 ** Even while structuring /  formatting the data
 **
 ** User's designation is to be accessed from the variable
 */

user.designation._id; // Breaches
user.designation.designationName; // Breaches

user.designationId; // Does not breach
user.designationName; // Does not breach
```

This law ensures that the system has a decoupled design.
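One way to honour the law is to move the chained behaviour behind a method on the direct neighbour, so callers never reach through the returned object. A small sketch (the `filterAdults` name and the user shape are my own illustrations):

```javascript
class User {
  constructor(users) {
    this.users = users;
  }
  getUsers() {
    return this.users;
  }
  // The traversal lives inside User itself, so callers ask their
  // direct neighbour instead of chaining through the returned list.
  filterAdults() {
    return this.users.filter((u) => u.age >= 18);
  }
}
```

Callers now write `userObj.filterAdults()` rather than `userObj.getUsers().filter(...)`, keeping knowledge of the internal list inside the class.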

### 3. KISS (Keep It Simple, Stupid)

I strongly believe that KISS is more meaningful when it is acronym for "Keep It Simple & Smart".

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/bdew5nt2ktk3kc15alg5.gif)

Keep It Simple, Stupid is a great life hack!!!
As the quote goes,

> "Everything should be made as simple as possible not simpler”
>
> - Albert Einstein

The code you write or the design you create as a programmer should be simplified, at its maximum simplicity.
Sometimes we come across complicated problem statements or requirements; most of the time the solution is quite easy, and we just don’t know how to approach it.

Understand the problem statement before you start solving it. Often the solutions are available, but we fail to plan how to write them; and once we get a solution, we hardly care to check whether it was THE BEST, most optimal way to solve it.

Most minimalistic example, we always fail to follow as we start as a developer,

```javascript
/**
 ** Simple Example of Short Circuit Evaluation in JavaScript
 **
 ** This is first thing we learn in C, C++ or Java when we learn
 ** expressions & operators, yet fail to apply this.
 **
 **
 ** Assuming you want to console a variable; only if the variable username
 ** is defined and not null
 **
 **/

// Breaching the KISS
if (username == undefined || username == null || username == "") {
  console.log("Error");
} else {
  console.log(username);
}

//Does not breach the KISS Principle
console.log(username || "Error");
```

Node’s asynchronous story is a great example of the KISS principle. Wondering how? Initially we used callbacks to deal with asynchronous functions. To make that easier, Node developers jumped to promises. To simplify it even more, they finally came up with async/await. Made sense? Anyone who has worked with JavaScript frameworks or libraries has felt the pain of dealing with callbacks 😭 and also understood how important the KISS principle is (how easy life became after async/await) 😎
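The callback-to-async/await progression can be seen side by side. Both functions below do the same thing, but the second reads top-to-bottom; `fetchUser` and the one-user store are my own stand-ins for any asynchronous call:

```javascript
const users = { 1: "Ada" };

// Callback style: control flow gets pushed into nested functions.
function fetchUserCallback(id, done) {
  setTimeout(() => done(null, users[id]), 10);
}

// Promise style lets async/await flatten the same flow.
function fetchUser(id) {
  return new Promise((resolve) => setTimeout(() => resolve(users[id]), 10));
}

async function greet(id) {
  const name = await fetchUser(id); // reads like synchronous code
  return `Hello, ${name}`;
}
```

With more than one dependent async step, the callback version nests one level per step while the async/await version stays flat, which is the KISS win.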

### 4. YAGNI (You Ain’t Gonna Need It)

As developers, we tend to think way too far ahead, too much into the future of the project, coding extra features on the assumption that “we might need it later” or “we will eventually need them”.

The answer to that is “YAGNI – You Ain’t Gonna Need It”: design and develop what is needed, and avoid merely foreseen requirements and features.

Every developer has been through this phase; I myself have committed this mistake. I once developed extra features that weren’t asked for, assuming they might be useful in the future, but in the end the final system the client wanted was totally different from what I had foreseen.

Why YAGNI? Chances are that you won’t need it at all, and you will have wasted time. If you are working in an Agile or incremental model of software development, you do not get the complete requirements in one go. Avoid adding bloat to your project.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/dwrxl4wr5ggneg9rg327.png)

#### Build what’s needed! Don’t be a wizard

Simply put: live in the present, not in the future, while ensuring you are prepared for the future.
Here is a simple example; it might sound a little vague, but you can relate.

```javascript
/**
 ** For the first iteration, the requirement was to design a simple
 ** React Web-App to manage and view meetings.
 **
 ** A backend developer builds the requirements and then spends an adequate
 ** amount of time on creating a socket for real-time notifications,
 ** based on his assumption that it would be needed for a Mobile App in
 ** the near future.
 **
 ** In the second iteration, they finalize that the project is confined to
 ** a Web-App only and there is no scope for a Mobile App at all.
 **
 ** What's the whole point of investing so much time and implementing it
 ** when it was not asked for in the first place?
 **
 **/
```

### 5. SoC ( Separation of Concern )

A major and most fundamental principle that we often fail to achieve, as developers and as humans, is Separation of Concerns.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/njn1e1ijokiywi2i9ki2.jpg)

Look at how messed up this looks. Imagine how your code base would look if you don’t separate things by their concerns.

As developers, we often make the simple mistake of bundling too many things into a single class/function. We design functionality in a way where one function, class or object is meant to “do all the things”. This approach to designing a solution is incorrect and becomes quite tedious to maintain in the longer run.

> To Do a Great Big Thing, Break It Into Tiny Things
>
> - Anonymous

Always maintain a high level of abstraction. The simplest example would be the MVP design (Model-View-Presenter), where the design is divided into three parts: the Model deals with the data, the View deals with what the user sees, and the Presenter mediates between the two.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/4xc6irkfx9tjqn65xpir.jpg)
Separation of Concern : The Nurse and The Doctor

As in the above example, the responsibilities of the doctor and the nurse are distinct, separate and well defined, and hence easier for each individual to manage and maintain.

Another simple example would be as follows:

<ImageRow>
  <ImageCol
    src="https://iamskb258154309.files.wordpress.com/2020/09/carbon.png"
    alt="HTML code with CSS link"
    caption="HTML code with CSS link"
  />
  <ImageCol
    src="https://iamskb258154309.files.wordpress.com/2020/09/carbon-1.png"
    alt="Separated HTML and external CSS"
    caption="External CSS file"
  />
</ImageRow>

The above example shows how we have separated the style and HTML content; basically externalizing the CSS file.

### 6. Boy-Scout Rule ( Refactoring )

> If you have been part of the School Boy Scouts,
> you must be aware of the simple rule that states,
> "Leave the campground cleaner than you found it".

This particular rule can be applied to software development as well. When implementing new features or working on legacy code, one thing we fail to check is how our change affects the existing quality of the code.

We do not look for the technical debt in the existing code, instead end up building new features on top of it. This will eventually end up toppling the complete system and breaking the code at some point, which is one thing you definitely do not want to happen.

Refactoring is the key. Refactoring simply means changing the structure of the code without changing its external behaviour or end result.

Simplest Example:

Headphones were refactored into earphones: easier to carry and cheaper.

Similarly, we should refactor our code base for better understanding, readability and easy maintenance, and maybe also to improve efficiency and optimize execution.

```javascript
/**
 ** Before Refactoring
 **/

function getAddress(latitude, longitude) {}
function getCountry(latitude, longitude) {}
function getCity(latitude, longitude) {}

/**
 ** After Refactoring ::
 ** Better readability and maintain function-arity (<3-4 No. of Arguments)
 **/
function getAddress(coordinates) {}
function getCountry(coordinates) {}
function getCity(coordinates) {}
```

Note :
Avoid Unwanted Optimization / Refactoring

### 7. TDA ( Tell Don’t Ask )

Tell Don’t Ask is a basic principle that reminds us that object orientation is about bundling data together with the methods that operate on that data. Confusing?

When you want to access data from a class, never reach into the object directly; instead, go through a method that asks for that data, in the simplest case a getter/setter, as you have all heard of.

TDA suggests that it is always better to ask the object to perform an operation than to access its data directly.

Simple example for TDA would be as follows,

```javascript
/**
 ** Non TDA Approach
 **/

class User {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }
}

const userObj = new User("OhMyScript", "22");
console.log(userObj.name); // Breaches TDA
console.log(userObj.age); // Breaches TDA

/**
 ** TDA Approach
 **/

class User {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }

  getName() {
    return this.name;
  }

  getAge() {
    return this.age;
  }
}

const userObj = new User("OhMyScript", "22");

console.log(userObj.getName()); // Does not breach TDA
console.log(userObj.getAge()); // Does not breach TDA
```

### 8. P^3 ( P-Cube Principle )

This is not a programming principle but a general developer principle I firmly believe in, and the only thing that will make you proficient in all the principles above: Practice-Practice-Practice makes a man perfect.

![](https://iamskb258154309.files.wordpress.com/2020/09/il_340x270.1819894240_1lpq.jpg)

## With Great Experience comes Great Knowledge

With experience, your standards will just keep getting better.

These principles are not something you can learn once and apply immediately. Much like old wine, they get better with time and practice.

These were some of the most important basic principles that play a big role in your journey as a developer. I am pretty sure there are many more principles I have missed.

For those of you who know about SOLID principles, please stay tuned for the next article. The SOLID principles are some of the most important design principles when it comes to Object-Oriented Programming, and I have decided to dedicate a separate article to them.

If you like the article, hit the like button, share the article and subscribe to the blog. If you want me to write an article on a specific domain/technology I am versed in, feel free to drop a mail at [hi [at] ohmyscript [dot] com](mailto:hi@ohmyscript.com)

Stay tuned for my next article on SOLID Programming Principles.

Do subscribe my blog [OhMyScript](https://ohmyscript.com) for such related articles. Stay tuned for more.

That’s all for now. Thank you for reading.

Signing off until next time.
Happy Learning.]]></content:encoded>
        <pubDate>Sun, 06 Sep 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Javascript’s setTimeout can sort an array, how?]]></title>
        <link>https://ohmyscript.com/blogs/settimeout-tidbit/</link>
        <guid>https://ohmyscript.com/blogs/settimeout-tidbit/</guid>
        <description><![CDATA[A playful demo of using JavaScript’s setTimeout to log array items in order as a quirky sort-mechanism.]]></description>
        <content:encoded><![CDATA[# Javascript’s setTimeout can sort an array, how?

The `setTimeout` function, part of the Web APIs available to JavaScript, can be used to sort an array.

Example:

```javascript
[111, 20, 13, 4, 544, 32, 12, 414, 123].forEach((item) => {
  setTimeout(() => console.log(item), item);
});
```

The same works for negative numbers as well, with a small offset so that no delay is negative.
Example:
Example:

```javascript
let arr = [10, -100, 650, -25, 5, -50];

const min = -Math.min(...arr);

arr.forEach((item) => {
  setTimeout(() => console.log(item), item + min);
});
```
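Wrapped up as a function, this trick is often nicknamed “sleep sort”: each value is its own delay, so smaller values get collected first. A sketch (the `sleepSort` name and callback design are mine; it only behaves for small, well-separated numbers):

```javascript
// "Sleep sort": each value schedules itself `item` milliseconds out,
// so smaller values are collected first. The callback fires once every
// timer has run.
function sleepSort(arr, done) {
  const result = [];
  const offset = -Math.min(0, ...arr); // shift so no delay is negative
  arr.forEach((item) => {
    setTimeout(() => {
      result.push(item);
      if (result.length === arr.length) done(result);
    }, item + offset);
  });
}
```

For example, `sleepSort([111, 20, 13], console.log)` logs the array in ascending order; values closer together than the timer's resolution may still come out swapped, which is part of why this is a party trick, not a sorting algorithm.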

Any guesses how?
Of course, this is not an efficient way to do it.

Clue: Call-Stack and Web API's

Reference: https://stackoverflow.com/questions/52679851/javascript-settimeout-and-array/52680067]]></content:encoded>
        <pubDate>Tue, 11 Aug 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Programming: Human Philosophy]]></title>
        <link>https://ohmyscript.com/blogs/programming-philosophy/</link>
        <guid>https://ohmyscript.com/blogs/programming-philosophy/</guid>
        <description><![CDATA[Why programming is fundamentally a human activity; thinking, decision-making and problem solving seen as philosophy.]]></description>
        <content:encoded><![CDATA[# Programming : Human Philosophy

This article is just about describing how I perceive programming. The main reason for writing it is a persistent stigma I keep coming across: that being a programmer is a big deal, and that programming is hard to learn and understand.

Also, from my personal experience, there have been numerous scenarios where people actually asked me:

<Question>
  - Do you need to be a CS graduate to become a programmer?
  - Why is programming difficult?
  - How do you learn it?
  - ...and so on and so forth
</Question>

## Programming, a philosophy

Programming is a simple daily process that we perform as humans. In our day-to-day activities we come across several scenarios that require exactly the skills needed to become a programmer.

<Image
  src="https://iamskb258154309.files.wordpress.com/2020/07/f63e2d1583bd614f8ca26a11de97f907.gif?w=405"
  caption="A problem? Oh! I can solve this."
  alt="problem-solver"
/>

## Philosophy I

> "Programming is art of thinking, decision making and problem solving"

To put it simply, **programming** is a process of thinking, decision making and execution. The root of programming starts with you: the way you **think**, **process** and **react** to a certain situation in order to overcome it is the simple philosophy behind programming.

I would like to take a simple instance to describe the above philosophy.
Assume you are a coffeeholic and badly want to prepare a cup of instant coffee for yourself. What would you do?

<Image
  src="https://iamskb258154309.files.wordpress.com/2020/07/tenor.gif?w=300"
  alt="Coffee gif"
  caption="Yes, I want to prepare a coffee for myself"
/>

Let me put it down here as a sequence of steps:

1.  Wash the vessels that you want to use.
2.  Take the required amount of milk in a vessel.
3.  Heat up the milk in the vessel.
4.  Add 1-2 teaspoons of instant coffee powder to the vessel.
5.  Add the required amount of sugar.
6.  Stir well until the sugar dissolves.
7.  Serve yourself the coffee in a coffee mug.
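Just for fun, the very same recipe transcribed literally into code; a toy sketch of my own, with each step becoming a statement:

```javascript
// The coffee recipe above, written as a program: think, decide, execute.
function makeCoffee(spoonsOfCoffee, spoonsOfSugar) {
  const vessel = { washed: false, contents: [] };
  vessel.washed = true;                                 // 1. wash the vessel
  vessel.contents.push("milk");                         // 2-3. add and heat the milk
  vessel.contents.push(`${spoonsOfCoffee} tsp coffee`); // 4. coffee powder
  vessel.contents.push(`${spoonsOfSugar} tsp sugar`);   // 5. sugar
  const stirred = vessel.contents.join(" + ");          // 6. stir well
  return `mug of: ${stirred}`;                          // 7. serve
}
```

The point is not the code: it is that you already run this exact sequence in your head every time you make coffee.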

So, if you have ever done this, you are already a programmer. Wondering how? This was a simple process of making coffee for yourself, but there were a lot of things you handled to make it.

Let us recollect: once you decided to drink coffee, you washed the required vessel and added the required amount of milk.
You also added coffee powder and sugar, and stirred well until the sugar dissolved.

**You might wonder, How and why does this make you a programmer already?**

```markdown
Philosophy 1:

- Programming is an art of Thinking,
- Problem Solving, Decision Making
- and Executing to resolve the issue.
```

_If you take the above instance: wanting a coffee was the problem statement;
how you would make the coffee was the critical part of your thinking process.
Upon deciding to make the coffee, choosing how much sugar or coffee powder to
add so it is not too sweet or too bitter was the decision making.
Finally, the complete process from washing the vessels to making the coffee was
the execution on your part._

---

This was a simple example, and there are several such scenarios that you deal with, on a daily basis which needs a little amount of critical thinking, problem solving, decision making and this whole process is what defines the philosophy behind the art of Programming.

## Philosophy II

> "Programming is a science of communicating with a Machine"

Now, thinking of the philosophy behind programming from a technical angle, programming is a way to **communicate**.
Communicate? How?

<Image
  src="https://lh6.googleusercontent.com/oWlWg8MfPgCO_KvWdAr0ejXrqgDWDbBw0O7ns2egRb-2ZVST4P459cFsH6R9giNYmghMN5QOW_bDuNkPHPIp5KOoy4VC244c_eQ9JQmz0zIb8YJ3SpTCzeNgO_QshE54"
  alt="Human and machine communication"
  caption="Human-Machine Communication"
/>

Programming is a medium to communicate with the machine. In other words, programming is a technique for talking to the machine, and thereby making the machine do the thinking, decision-making and execution for us.

![Machine thinking](https://iamskb258154309.files.wordpress.com/2020/07/tenor-1.gif?w=498)

Take the same instance as above, preparing a decaffeinated coffee, now with a coffee machine. How would the machine deal with it, partially assisted by us?

Let us chart it down:

1. Fill the coffee machine's water reservoir with water.
2. Add the coffee filters to it.
3. Flip the switch on the machine to prepare the coffee.
4. Collect the decaf and add sugar.
5. Stir well.

```text
Philosophy 2:
Programming is a science of telling the machine how to
    - ingest,
    - process, and
    - store data, thereby resolving the issue.
```

If you take the above instance with the coffee machine in the picture, wanting
a coffee prepared by the coffee maker was the problem statement; the machine
would prepare the coffee/decaf for you.

The machine was designed to understand the problem and process it accordingly.

Upon processing it, the sequential execution that delivered the decaf for you to consume was possible because the system/machine was programmed to do so.

---

## Can humans communicate with machines?

This is where programming plays a very important role: we write instructions that the machine will follow. Machines are very literal; they take our instructions exactly as laid out and follow them the same way.

Here is where programming languages come into play. You must be quite familiar with names like **COBOL, C, C++, Pascal, Python, Java** and so on.

But there’s a big problem here!

<Image
  src="https://iamskb258154309.files.wordpress.com/2020/07/captured-1.png"
  alt="Well no"
  caption="Well, no!!!!!"
/>

Machines understand machine-level language, otherwise called binary language; the complete representation of the instructions is in bits, i.e., 0s and 1s.

<Image
  src="https://iamskb258154309.files.wordpress.com/2020/07/image.png"
  alt="Machine language"
  caption="Basically how Coffee-Machine Instruction would look like in Machine Language"
/>

For a human, writing such instructions/programs would be very tedious work. To eliminate this trouble, we came up with the idea of high-level languages: programming languages for interacting with the machine that are much closer to human language, without the tedious work of dealing with bits as in machine language.

Every programming language has a set of grammatical rules, called syntax, that we must follow, no matter what;
just as there are certain grammatical rules to follow and take care of, whether you are speaking French, German or English.

One major component that plays an important role here is the compiler/interpreter.

<Image
  src="https://iamskb258154309.files.wordpress.com/2020/07/unnamed.jpg"
  alt="Compiler"
  caption="The translator software is called a compiler; it converts high-level code to machine-level code"
/>

A compiler is system software responsible for translating your high-level programming instructions into machine-level instructions for the machine to execute.

---

Coming back: programming, at its core, is taking a big problem and breaking it down into smaller, more compact problems, until they are small enough that we can tell the machine to resolve them for us.

Those are my ideas about programming. I believe that programming is a very fundamental thing that every person does on a day-to-day basis.

From waking up early in the morning and deciding what to do next, to going off to sleep at night, there are several scenarios where you play the role of a programmer: dealing with problems, solving them and making things happen and work. I also believe that programming should be taught to us from elementary school, because programming means “we are thinking, making decisions, learning and, most importantly, letting our brain actively execute”.

If you liked the article, hit the like button, share it and subscribe to the blog.
If you want me to write an article on a specific domain/technology I am proficient in, feel free to drop a mail at [hi [at] ohmyscript [dot] com](mailto:hi@ohmyscript.com)

Stay tuned for my next article on **The Programming Principles**.

That's all for now. Thank you for reading.

Signing off until next time.
Happy Learning.]]></content:encoded>
        <pubDate>Sat, 25 Jul 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Version Control System: Get a Bit Git Culture!]]></title>
        <link>https://ohmyscript.com/blogs/version-control-system/</link>
        <guid>https://ohmyscript.com/blogs/version-control-system/</guid>
        <description><![CDATA[A beginner-friendly overview of version control systems (VCS), Git, GitHub and core VCS concepts.]]></description>
        <content:encoded><![CDATA[# Version Control System: Get a Bit "Git" Culture!!!

This post is about version control systems, with explanations of the basic terminology of versioning.

![captionless image](https://miro.medium.com/v2/resize:fit:936/format:webp/1*QDejJdxA0oRhqNKKMJMYTg.jpeg)

Git is one of the most powerful tools I have used while learning programming. It is an open-source tool I would recommend to anyone, whether from a programming or a non-programming background. People in the programming field should definitely master this software tool.

The perks of learning and using Git are invaluable.

In this article, we will talk a bit about:

- Version Control System
- Git
- Github

<Note>
We will also touch on other things, like how Git works, Git commands and so on.

Are you ready? Do drop your reviews as well; since I am a beginner, your reviews would mean a lot. If there’s anything wrong in the article below, please feel free to drop a comment.

</Note>

## Version Control System

A **_Version Control System (VCS)_** is a tool that keeps track of changes to files by recording every modification made to the code, or to any sort of file. It stores all the edits, changes and historical versions as snapshots. It basically doesn’t preserve a fresh complete copy of the file each time;
instead, it preserves an image of the file at the point in time it was committed.

![Aye, Aye! Developers will definitely understand!](https://miro.medium.com/v2/resize:fit:998/format:webp/1*F6EXpDg4-Vx2qFTbcZZifA.jpeg)

## CVCS vs DVCS

There are two types of VCS, namely,

- _**Centralized VCS**_
- _**Distributed VCS**_

> **_Note:_** a few terms you will be reading often: repositories, the local repo and the central repo.
> A **_repo_** is a central place where some sort of data is stored and can be found.

A **_Centralized VCS_** is a system that contains only one repo, where each user gets their own working directory. Whatever you commit from your working directory is reflected directly in the central repo.
So basically,

- You change something.
- You commit.
- The change is reflected in the central repo.
- The change is reflected in the other users’ systems as well.

A **_Distributed VCS_** works differently. Here, every user has their own **_working directory_** (the folder where the project you are working on lives) and owns a **_repository_** called the **_local repo_**, while another repo sits on a central system: the **_central repo_** or **_remote repo_**.

![CVCS v/s DVCS: Which is better??](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*g2yeS33ARslJ9Jg8uZGULQ.png)

Whatever you change and commit from your working directory is first reflected in your local repo. Only once you **_push_** the content from the local repo is it reflected in the central repo. Even then it won’t affect another user’s repo: that user must **_pull_** the content to get it into their own local repo and working directory.
So basically,

- You commit the code to your local repo.
- You push the content from the local repo to the central repo.
- The central repo is updated.
- The other person pulls the code to get the same content in their local repo.
- The other person’s repo is updated.

![This is how Git works from your currently working directory to Remote repo and back](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*Ju-X7VzkCnYT7UngdT26Qw.png)
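
As a hedged sketch of this round trip (the directory names `central.git`, `alice` and `bob` are hypothetical), a bare repo can stand in for the central repo and two working copies for two users' local repos and working directories:

```shell
set -e
tmp=$(mktemp -d)

# The central (remote) repo: bare, i.e. it has no working directory of its own.
git init -q --bare "$tmp/central.git"

# --- User 1: a local repo plus a working directory ---
mkdir "$tmp/alice" && cd "$tmp/alice"
git init -q
git config user.email alice@example.com
git config user.name Alice
echo "v1" > notes.txt
git add notes.txt
git commit -qm "first commit"             # recorded in the LOCAL repo only
branch=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
git remote add origin "$tmp/central.git"
git push -q origin "$branch"              # push: the CENTRAL repo now has it
git -C "$tmp/central.git" symbolic-ref HEAD "refs/heads/$branch"

# --- User 2: cloning gives them their own local repo + working directory ---
git clone -q "$tmp/central.git" "$tmp/bob"

# User 1 makes another change and pushes it...
echo "v2" >> notes.txt
git commit -qam "second commit"
git push -q origin "$branch"

# ...and user 2 pulls to bring their local repo and working dir up to date.
cd "$tmp/bob"
git config user.email bob@example.com
git config user.name Bob
git pull -q
grep v2 notes.txt
```

Note how user 2 never sees the "second commit" until they pull; that is the distributed part.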

I hope this gives you an ample amount of idea about how the **_Version Control System_** works.

![captionless image](https://miro.medium.com/v2/resize:fit:654/format:webp/1*z4Ct6r8hrL-JQ5i9jaWr0A.png)

Jumping into the main topic: what are **_Git_** and **_GitHub_**?
People often have the misconception that **_Git_** and **_GitHub_** are one and the same. They are related, but different.

## Git

**_Git_** is a VCS tool that runs on your system and keeps track of versions of your code/content in your local repo, as well as of changes to the central repo when you pull.
Whereas **_GitHub_** is a service for projects that use Git. In simple terms, GitHub is like a cloud storage system that preserves the code across different versions; a hosting service for Git repos.

Some alternatives for **_Github_** are:

- GitLab
- BitBucket
- GitBucket
- SourceForge, and so on.

Now let us jump into basic terminology and simple usage of a Git repo on your local system.

## Commit

What’s a **_commit_**?
A **_commit_** is a snapshot of content that records changes to the local repo; each commit contains the new changes/additions.
Example: `git commit -m "some message describing the change"`

## Master

What’s **_master_**?
**_Master_** is the default name for the first branch; it is usually the branch reflected in deployment and is expected to hold only stable code.

## Branch

What’s a **_branch_**?
A **_branch_** is a pointer to a commit. If you develop a feature, it can be created on a different branch; this is a better way of working with Git, and it also ensures a committed backup of the code version from before the feature was implemented, just in case of issues.

## Push

What’s a **_push_**?
A **_push_** updates a remote branch with the commits made to the corresponding branch in the local repo; in the simplest words, it pushes the local changes to the central repo.

## Merge

What’s a **_merge_**?
A **_merge_** takes the changes in one branch and adds them into another branch, usually the branch that holds the base code. The new commits are usually proposed via a "**_Pull Request_**" before being merged.

_For instance_, say the master branch has content ‘XY’.
Now you and your friend have been asked to develop **features ‘A’ and ‘B’** respectively.
So, you create a branch ‘**featureA**’ and implement your code on top of the existing base code ‘XY’, ending up with **‘XYA’** in the ‘**featureA**’ branch.
After a pull request and merge into the base master branch, master will also have the ‘XYA’ code.
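
The featureA story can be replayed with plain Git commands (the file and branch names here are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # be explicit about the branch name
git config user.email you@example.com
git config user.name You

echo "XY" > product.txt                   # master holds the base code 'XY'
git add product.txt
git commit -qm "base code XY"

git checkout -qb featureA                 # branch: a new pointer to work on feature A
echo "A" >> product.txt
git commit -qam "implement feature A"     # featureA now holds 'XYA'

git checkout -q master                    # master still holds only 'XY' here
git merge -q featureA                     # merge: master now holds 'XYA'
cat product.txt                           # prints: XY then A
```

(On GitHub, the merge step would normally go through a pull request, as described below in the terminology.)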

## Pull Request

Now, wondering what a "**_Pull Request_**" is?
If someone has changed code on a separate branch of the project and wants it reviewed before it is added to the branch holding the stable code, they open a pull request and add reviewers to it. Pull requests ask the repo collaborators to review the commits and merge the changes. **_Pull requests_** happen on **_GitHub_**.

## Clone

What is a "**_clone_**"?
A **_clone_** is the process of downloading a complete repo onto your system; it downloads the whole project as a new folder on the local system.

## Pull

What is a "**_pull_**"?
A "**_pull_**" is used to receive data from GitHub. It fetches the changes from the central repo and merges them into your local repo and working directory. A **_pull_** happens in **_Git_** (the local repo).

There is much more jargon related to Git and GitHub; for beginners, these are the essentials, and I hope they helped you get an idea of how Git and VCS work.

There is much more to learn and explore about Git and GitHub. If you want me to write another article about Git strategies, or about Git operations like merging, rebasing, resetting and reverting, drop a comment and share your views.

## Visualize and play with Git

Use the [Visualize Git](https://git-school.github.io/visualizing-git/) tool to gain more insight.

Do play with Git and GitHub; that is how you learn to use Git well.
        <pubDate>Wed, 15 Jul 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Object-Oriented Programming Concepts (OOPs) Simplified]]></title>
        <link>https://ohmyscript.com/blogs/oops-simplified/</link>
        <guid>https://ohmyscript.com/blogs/oops-simplified/</guid>
        <description><![CDATA[A real-world explanation of object-oriented programming.]]></description>
        <content:encoded><![CDATA[# Object-Oriented Programming Concepts (OOPs) Simplified!


In this article, I would like to introduce a beautiful programming paradigm that solves a lot of real-life problems in terms of programming.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/8ch1sn08hzxw40v6zssd.png)

Let us look into what OOPs is, the ideology behind it and its main concepts. The complete article is described in non-programming, i.e., layman’s terms.

**_Note:_** once you have read and understood this article, learning any OOPs language will be easier.

## What is OOPs?

Object-Oriented Programming is a programming paradigm: a way of writing a solution for a given problem statement.

In simpler terms, it is a way of writing computer programs using the idea of “**_objects_**” to represent data and methods.

**_Instance:_** take a scenario: building a car. Building a car involves a lot of things that must be taken care of.

1.  Drawing out a blueprint of how the car should look.
2.  The things that are required to build the car.
3.  The things that should be accessible to the one who builds it.
4.  The things that are accessible to the one who drives it.
5.  How the things attached to it should function.
6.  How different things help other things function.

Here, the **_things are the data_** and the **_methods are the functionality associated with that data_**. The object-oriented paradigm is basically the idea of binding the data and the functionality together, to simplify solving real-world problems.

Basically, some real-world problems are solved more efficiently when an object-oriented approach is taken; in the above instance, building the car was the problem statement.

How we go about it, can be understood as we go ahead.

You might have read the term “**_Object_**” above. Wondering, what is an **_Object_**?

An **_object_** is the **_basic fundamental unit of object-oriented programming_**. It is a unit formed by the data and the methods (the things and how the things function), built from a **_blueprint_**.

Taking the same instance as above, we can say the **_car is an object_**. It holds all the things and functions together to produce the behaviour of a vehicle.

Talking about blueprints brings up another important term, the “**_class_**”, which is basically a blueprint for creating objects.

For instance, in a car blueprint, we take care of essential features like the fuel, the type of fuel, the engine to be used, the design of how it should look, how the vehicle should function on ignition, and so on.

It’s possible to produce cars in series using the blueprint, without re-building the machine from scratch.

This basically gives a clear image of what **_OOPs_** is all about.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/9i6pgmxua4s9hvp991r1.PNG)

Now, taking note of the major features of OOPs concepts:

- Abstraction
- Encapsulation
- Inheritance
- Polymorphism

## Abstraction

What do you mean by “**_Abstraction_**”?

It simply means showing the essential features of a module while hiding the details of its internal functionality.

Example: a driver using the accelerator doesn’t need to know precisely how it works at the mechanical level; they only know that pressing the accelerator makes the car accelerate.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/hzlj0ltznczreyklx388.png)

Water is composed of hydrogen and oxygen, but what we see is liquid water (abstraction).

Now jumping to another term, “**_Encapsulation_**”:

It is a method of wrapping up the things (data) together with the ways they can function to do something productive (methods); basically, putting the data and methods together to classify them. In other words, we can also define it as a way of exposing a solution to a problem without requiring the user to fully understand the problem domain.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/8g2hfzp48qvwfr88a80e.png)

Classes/interfaces are among the features that help implement encapsulation at the programming level.

Example: in the car design/integration team, the members working on the lighting system of the car don’t need to know how the brake system works; as simple as that.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/9ihigfkhncsvrzuphuqd.jpg)
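
As a minimal sketch of abstraction and encapsulation together (the `Car` class and its members are hypothetical), the driver's whole view of the car is one public method, while the internal state stays private:

```typescript
class Car {
  private speedKmh = 0; // hidden internal state (encapsulation)

  // The driver's abstract view: press the pedal, get a result.
  accelerate(): number {
    this.speedKmh += 10; // internals the driver never sees
    return this.speedKmh;
  }
}

const car = new Car();
car.accelerate();
console.log(car.accelerate()); // 20
// Accessing car.speedKmh directly would be a compile-time error:
// the field is private, so the internals stay hidden.
```

The driver (the caller) only ever deals with `accelerate()`; how speed is tracked internally can change without affecting them.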

## Inheritance

Now, explaining the term “**_Inheritance_**”:

“**_Inheritance_**”, as the word suggests, is a way for a child to inherit features (methods) or things (data) from its parent. This is an important feature of OOPs that allows you to reuse and redefine an existing model.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/7f8vmu270oca4ootapfg.gif)

Another example: there is a Car X, and Models A and B of Car X are planned for release.  
Model A is already designed and implemented. Model B is almost the same, except for the tyres and the disc-brake system.  
So, Model B can take the design of Model A and alter it as required for the tyres and disc-brake system.  
Made sense? They don't have to redesign it from scratch; instead, they inherit the things and functionality from Model A.
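
The Model A/Model B story could be sketched like this (class and method names are made up for illustration):

```typescript
class ModelA {
  tyre(): string { return "standard tyre"; }
  brakes(): string { return "drum brakes"; }
  engine(): string { return "1.2L petrol engine"; } // shared, never redesigned
}

// ModelB inherits everything from ModelA and overrides only what differs.
class ModelB extends ModelA {
  tyre(): string { return "alloy-wheel tyre"; }
  brakes(): string { return "disc brakes"; }
}

const b = new ModelB();
console.log(b.engine()); // inherited unchanged from ModelA
console.log(b.brakes()); // overridden in ModelB
```

Only the tyre and brake methods are redefined; the engine comes along for free, which is exactly the "no redesign from scratch" point above.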

## Polymorphism

Finally, talking of “**_Polymorphism_**”:

**_Polymorphism_** is a feature that allows different implementations behind the same interface.

To put it in simple words, “**_polymorphism_**” enables the same functionality to be defined in different forms.

A simple, silly but effective example would be using 'Cut' as a polymorphic word:

- A surgeon would begin to make an incision
- A hair stylist would begin to cut someone's hair
- An actor would abruptly stop acting the current scene

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jkt0xrsigy5hyauxq6fy.jpg)

Another example: Car X has Model A and Model B, and both need fuel to run.

Say Model A uses petrol and Model B uses diesel as its fuel. Hence, the automotive engine designs vary, though they perform the same function.
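
A minimal sketch of that engine example (names are hypothetical): both engines expose the same method, each with its own implementation:

```typescript
interface Engine {
  ignite(): string; // the shared functionality
}

class PetrolEngine implements Engine {
  ignite(): string { return "spark ignition with petrol"; }
}

class DieselEngine implements Engine {
  ignite(): string { return "compression ignition with diesel"; }
}

// The same call behaves differently depending on the concrete object.
const fleet: Engine[] = [new PetrolEngine(), new DieselEngine()];
for (const e of fleet) console.log(e.ignite());
```

The loop never asks which engine it has; it just calls `ignite()`, and each object supplies its own form of the behaviour.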

This is how simple the OOPs concepts are. I hope you got at least a little clarity on what OOPs is and what its features are.

This covers OOPs in general terms.

If you want me to write an explanation of OOPs from a programming point of view, drop a comment.
This is a revised version of [my Medium Post](https://shravan20.medium.com/object-oriented-programming-concepts-oops-simplified-6a45f3de81ad)

Stay Tuned for more posts.
Connect with me at [mrshravankumarb@gmail.com](mailto:mrshravankumarb@gmail.com)]]></content:encoded>
        <pubDate>Sun, 21 Jun 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[API with Deno :: Antidote for Node]]></title>
        <link>https://ohmyscript.com/blogs/api-with-deno/</link>
        <guid>https://ohmyscript.com/blogs/api-with-deno/</guid>
        <description><![CDATA[Building a RESTful API using Deno, the Oak framework and MongoDB.]]></description>
        <content:encoded><![CDATA[# API with Deno :: Antidote for Node

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/l1zokutl8n2hbal0wf3u.png)

Deno 1.0 has been released.

<Warning>
  To start with, I am using the Oak/Snowlight frameworks to write a RESTful
  API against the currently stable Deno version, 1.0.0.
</Warning>

Since then, there have been several speculations and hypotheses among developers, asking:

> Is Deno going to replace Node?
> Is Deno better than Node?
> Was all the effort, time and energy put into learning Node completely pointless?

---

### As Ryan Dahl said at JSConf in his talk, “10 Things I Regret About Node.js”

> “ Node could have been much nicer ”

When Ryan started Node, he missed some very essential aspects, which he recalled in that talk.

To summarize, the design flaws in Node.js that Ryan mentioned are as follows:

- **Security**: Node has no security model. Since you use npm packages without completely knowing what’s in their code, they may contain undiscovered loopholes or severe vulnerabilities, making network attacks or hacks easier; easy access to the computer was wide open.
  Deno overcame this security concern by putting the environment in a sandbox by default, where each operation beyond the execution context must be explicitly permitted by the user.

- **Importing URLs**: one major issue was requiring modules from the node_modules folder, where Node.js allowed syntax omitting file extensions; this clashed with browser standards. To resolve it, Node had to bundle a module-resolution algorithm to find the requested module.
  Deno overcame this with `import` instead of `require`. You don’t need the packages locally; instead, you fill in the URL you need the module from. This throws light on another aspect, explained in the next point.

- **The unnecessary node_modules folder**: with Node, npm packages brought in a lot of code whose vulnerabilities were not known for sure. Apart from that, every time you needed a module from node_modules you had to `require` it, which ran the module-resolution algorithm again; and that algorithm is itself quite complex.
  In Deno, there is no need for a node_modules folder. Modules are imported using URLs, cached, and made globally available to the project you are executing. This might make you wonder: does it always need an internet connection to run?
  Well, no. When packages are first imported, they are downloaded and cached, just like with npm. They are cached in a folder:

> _On Linux_: `$XDG_CACHE_HOME/deno` or `$HOME/.cache/deno`

> _On Windows_: `%LOCALAPPDATA%/deno` (`%LOCALAPPDATA%` = `FOLDERID_LocalAppData`)

> A folder called `.deno_plugins` is also generated in the project’s directory on the first run when you use certain packages like deno_mongo, to install their Rust binaries.

- **package.json**: given the above two drawbacks, maintaining package.json was an unnecessary abstraction. Semantic versioning was one of the main purposes package.json served.
  Deno, on the contrary, does not use a package manager like npm, so the need for semantic versioning is eliminated, and with it the need for a package.json-like manifest.

- **Handling asynchronous operations**: in Node, the initial way of handling asynchronous operations was the callback pattern. Node later experimented with a Promise API, included in late 2009 and removed in early 2010; there was an outcry, since by then several packages/libraries used the callback pattern for async operations. Node was designed well before JavaScript had a standard Promises API.
  In Deno, the most fundamental (lowest-level) binding for asynchronous operations, “ops”, is promise-based.

- **TypeScript compiler built in, out of the box**: Node supports JavaScript scripts, i.e., .js files. To write TypeScript in a Node environment, you had to set up the TypeScript configuration for the project along with the TypeScript package.
  That setup pain is over with Deno, which supports TypeScript right away, without initial configuration of the application; you are confined to the default configuration of Deno’s TypeScript compiler. If you want to override the defaults, you can add a ‘tsconfig.json’ file and pass the flag `--config=tsconfig.json`.
  Plain JS works with Deno too; files with .js extensions run as well.

- **Lastly, top-level await, supported by V8**: Node has supported the async-await pattern of handling asynchronous operations since it landed in JavaScript (ES2017). If you define a function that performs an asynchronous operation, you normally have to follow the standard async-await pattern.
  Deno has the awesome feature of using await directly, since it is bound directly to promises; in simpler terms, you can use ‘await’ at the top level of a program without an enclosing async function.
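
A small sketch of the difference (the `delayed` helper is made up for illustration): in Deno, the `await` below needs no enclosing async function:

```typescript
// A toy async operation: resolve a value after a short delay.
function delayed<T>(v: T, ms: number): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(v), ms));
}

// Top-level await: no `async function main()` wrapper needed at module scope.
const value = await delayed("brewed", 10);
console.log(value); // "brewed"
```

In classic Node you would have to wrap those last two lines inside an `async` function (or a `.then` chain) before you could `await` anything.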

---

![alt text](https://dev-to-uploads.s3.amazonaws.com/i/c4sobg6zbt6nj6xrwm0s.gif "Oh yes! Now, we know the design flaws in Node And Why Deno")

_With each of these flaws being handled in Deno, Deno looks quite promising.
We have yet to see, based on the adoption rate and flexibility of the environment and the frameworks built on Deno, how it turns the industry around._

---

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/r1ge094qacay00wrnlzo.gif)

In this article, I will discuss an application-server setup using the Oak framework, connected to a MongoDB database using the deno_mongo native Deno driver.

Let us dig into Deno and then start creating a RESTful API using Deno [the Oak framework, inspired by the Koa framework].

## What is this `Deno`?

- A simple, modern and secure runtime for JavaScript and TypeScript that uses the V8 engine and is built with Rust.
- v1.0.0 of Deno was officially released in May 2020.
- Deno has Rust at its core.
- Supports TypeScript without explicit setup.
- Not compatible with Node modules and npm.

Further details can be found in the official [Deno v1](https://deno.land/).

**Now starting with creating simple RESTful API using Deno’s framework called Oak.**

In this article, we will be creating Application Server using

**Oak**: A Middleware Framework for Deno’s HTTP server; inspired by Koa Framework.

**deno_mongo**: a native MongoDB database driver built for the Deno platform.

## Getting started

Before we start building: this is a simple application server that can create a user and fetch user details.

Below is the folder structure of the mini-project:

- _models_ contains the model definition, in our case only User Interface

- _routers_ contains the API routes that handle API requests

- _controllers_ holds the files that validate the data sent from the frontend

- _services_ contain all the business logic of the API routes.

- _repository_ contains the files that deal with all the queries related to the database.

- _middlewares_ contains the files that have different route-level middlewares

- _helpers_ contains files that deal with some sort of helper functions

- _keys_ contains the .json/.js/.ts files that store constant values or key values

- _.deno_plugins_ is generated upon first execution; it is just a cached version of the libraries/modules imported in the codebase

- _app.ts_ is the entry point of the application

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/vddwo3wuug3p8hiewb8r.png)

---

Starting with an 'app.ts' file.

<Gist id="zhravan/63ab0a6da0380e540f09e84be1881dc5" />

This is app.ts file; starting point.

---

Now we have a routers folder, which has a collection of routes related to the same service.
Here let us say, User as an independent service.

Now let us create a router for User with the HTTP methods:
POST → ‘/user’
GET → ‘/user/:id’

These add a user and fetch user data as well. Create a 'routers' folder and, inside it, a file 'userRoute.ts'; this file deals only with routing to the user service. The route file looks as follows:

<Gist id="zhravan/127d3d2cee641f32292f8e15ba1a34de" />

This is userRoute.ts file;

---

Next, create another folder, controllers, with a file userController.ts, which handles success and error responses and usually deals with data validation as well.

<Gist id="zhravan/a5eee8e7875682f2548005ed56bb60e9" />

This is userController.ts file;

---

Next, create a services folder with a file userServices.ts, which handles the business logic of the API.

<Gist id="zhravan/1d16dc3a4fa6cdfc625dd4daf9c26309" />

This is userServices.ts file; having business logic.

---

Finally comes the repository layer, which deals with database queries. Following DRY (Don't Repeat Yourself), you write the queries once in the repository layer and call them as many times as required.

<Gist id="zhravan/4d064eb50614d9938f111c3f7e75f2e6" />

This is userDAL.ts file

---

Next, we create a 'database' class for database connectivity, whose instance we can use to write queries.

Create a database folder, with a file 'config.ts', which looks like as follows,

<Gist id="zhravan/863be8906ac67a628e73c55ec1549198" />

This is the config.ts file; dealing with all the database connectivity code.

---

Finally, we create a User interface as a model for the user database; since Deno does not currently have an ORM, an interface will do.

In a model folder, create a file userInterface.ts;

<Gist id="zhravan/2667cfdcc218172bc2ad1b12f8dd32e4" />

This is userModel.ts; having User Interface.

---


These are the fundamental requirements for running the Oak-based server application.

Along with this, there are other code snippets required to run the project. These are available in [my Github account](https://github.com/zhravan/deno-crud-api).

If you want to clone the project that I am working on, clone the 'oak' branch:

```sh
git clone -b oak https://github.com/zhravan/deno-crud-api.git
```

Now let’s run the project. Open the terminal/command prompt in the root directory of the project

```sh
deno run --allow-net --allow-write --allow-read --allow-plugin --unstable app.ts
```

`--allow-net`, `--allow-write` and the other flags grant Deno permission to access the network and other resources.
When you run this command for the first time, it downloads all the required library files and caches them locally; plugin binaries land in a folder named ./.deno_plugins, which you should add to .gitignore before committing your code.

## Resources

1. [10 Things I Regret About Node.js - Ryan Dahl - JSConf](https://www.youtube.com/watch?v=M3BM9TB-8yA)
2. [Oak Framework](https://deno.land/x/oak)
3. [Deno - MongoDB Driver](https://deno.land/x/mongo)
4. [Deno is the New Way of JavaScript-Ryan Dahl & Kitson Kelly](https://www.youtube.com/watch?v=1gIiZfSbEAE&t=11s)

We are at the very beginning of Deno.land, and the current scenario looks very promising for the future. I look forward to working with the frameworks coming to the Deno environment.

I am already fond of another framework called Snowlight (inspired by the Express framework in Node), which is also available in the same GitHub codebase on the `snowlight` branch:

```sh
git clone -b snowlight https://github.com/zhravan/deno-crud-api.git
```

From my point of view, Deno already looks better than Node. I am looking forward to exploring many more frameworks and libraries on the Deno platform.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/p1dk77nycq31syj9sr7t.png)

This is a revision of my [Medium article](https://medium.com/swlh/api-with-deno-antidote-for-node-js-2940dda58415).

Until then, signing off for the day.
Happy Learning. :)]]></content:encoded>
        <pubDate>Sat, 20 Jun 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Corona : What a year this week has been. . .]]></title>
        <link>https://ohmyscript.com/musings/what-a-year-this-week-has-been/</link>
        <guid>https://ohmyscript.com/musings/what-a-year-this-week-has-been/</guid>
        <description><![CDATA[reflective piece on humanity, karma, and the unexpected ways the pandemic forced us to confront our own civilization.]]></description>
        <content:encoded><![CDATA[# Corona : What a year this week has been. . .

Hi everyone.

I am simply trying to put down my thoughts on the current crisis that is debilitating humanity: collecting the words in my head and setting them out here.

Maybe this perception of mine will sound cruel to the world, or to humanity itself. Maybe it will even sound inhumane.

## On Karma and Consequences

I believe, Corona was just an act of **_Karma_**.

> “So, you mean, the world deserved it?
> Did people deserve to die?
> Did People deserve to suffer?”

No, far from it. No one deserves to suffer. No one deserves to lose the ones they love and care for. No one deserves to die. And most importantly, even though death is the destination we all share, no one deserves a death with so much suffering.

You might wonder what I am doing, contradicting my own words. Perhaps when I say **_Karma,_** I mean something different by it. Everyone has their own perception, and each of you will understand it in your own way.

### What Karma Means to Me

When I talk about **_Karma_**, or what my belief about **_Karma_** is,

> It is simply the product of what you do.
> It isn’t your mistake.
> It is not that you deserve to suffer.

Who am I talking about? When I say you, I mean humans.
Take a moment, step back, and think about it yourself. Think about how ruthless humans have been to **_Mother Earth._** Have you ever even reflected on the acts of cruelty to animals that we have been committing for so long?

<Image
  src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*ReQnrf-I-uk70FUYOcLmIw.jpeg"
  alt="Earth from space showing the planet's fragility and beauty"
  caption="Our planet, the only home we have"
/>

We are all at a point in the evolution of the human race, from Pliopithecus to this so-called modern Homo sapiens, a journey that was supposed to take us from UNCIVILIZED to CIVILIZED living beings.

## The Fiction of Civilization

But in actuality, is that the truth? Are humans civilized? Civilization, and humans being civilized, has turned out to be a fiction.

I have gone from studying civilization in my class 6th and 7th history textbooks to reading the newspaper about uncivilized acts committed every day.

### A Forced Pause

This might sound too cruel, but yes, the world needed this global pandemic to halt us for a while and make us listen to MOTHER EARTH.
Many have tried protesting against humanity's uncivilized acts; in the end, all they could do was try and wait for success. I am not saying the brutal act of Corona is that success, or even a solution to the problems.

<Image
  src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*dqIXKKYc3k3r7aNfRgE_ig.jpeg"
  alt="Nature reclaiming space, showing environmental recovery"
  caption="Nature heals when given a chance"
/>

But at the very least, **_deforestation_** was never a solution for human civilization. **_Polluting the earth_** with mixtures that never even existed in nature was never part of any civilized act of humans. **_Killing animals_** and then talking about their extinction was never part of my biology textbook, nor was building a business empire on the earth's resources.

No one has ever really given it a thought. I am pretty sure of that.

### The Human Condition

> Do you know one funny thing about humans? They never care about anything but themselves.

A problem is only a problem when it causes loss or damage to you. Until then, people don't really care.

### Unexpected Acts of Civilization

Do you know why I am writing this down? Some time ago, when I went to the shop to buy groceries, I was shocked to see everyone standing in line to buy GROCERIES.

That is so freakishly abnormal, considering that I am in India, where people rarely maintain a queue. It is all the more striking to see this sort of civilized behavior.

In another instance, I came across a news story about a family suffering and struggling, unable to earn even a day's meager meals. Then, out of nowhere, a local politician stepped in to provide them with their basic necessities.

It was even more surprising to see a politician perform an act of kindness (of course, not all politicians are the same). And I have come across so many more scenarios like these.

## All it took was a microscopic organism to make us more human.

A virus has locked us completely down from all our inhuman activities.

## What Changed in Us

Today, we respect people in uniform. People who had no time for their parents or family are now locked down with them 24/7.

We ensure our household help stays home and takes care of themselves. We respect the people who, in this outbreak of disease, have the courage to do door-to-door deliveries.

All this made me ruminate on the **_why_** and **_what_** behind these sudden changes and surprises in humans. That is when I felt this pandemic was something that happened because of human acts, and that through it, with a little awareness, all of us will face a reality check about Mother Earth's condition and health.

This global pandemic has taken a lot of lives. So have we taken a lot from **_Mother Earth_**. I am not saying this justifies what is happening around the world.

## Our Second Chance

All I want to say and convey is that

> It’s high time to act kind.
> It’s time to act like we are all known as, CIVILIZED.
> It is time to be who we are, the civilized beings, HUMANS

This is our second chance, and we should not mess it up this time. Let us all stand strong and help each other cope with the current state of things.

### Becoming Better Humans

We have, by every means, become more CIVILIZED. This pandemic has changed humans for sure. Maybe not too many of us, but significantly.

This pandemic has changed us; we have become better humans. If these are not acts of civilized Homo sapiens worth keeping, then what are?

To all those out there: if you have a little more than what is sufficient for you, please give it to those who badly need it. There are many people struggling for even a day's meager meal. Let us all unite, virtually, and help those who are suffering.

<Image
  src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*8wadAZ20v2jXHWoB4AYEeA.jpeg"
  alt="People helping each other during difficult times"
  caption="Together, we are stronger"
/>

### Words of Gratitude

Thanks to [sriram c](https://medium.com/@c.sriram.619) for encouraging me to write my thoughts out when I shared them on the current scenario.

Thanks to the warriors out there making things happen, even in high-risk and precarious situations, to protect us all.

Kudos to those doctors, policemen, military, and many others, who are making things happen to keep us sheltered and guarded.

Maybe you do not share my perception of the situation. Maybe my perception seems evil, or wrong, and you perceive things differently.
Do feel free to share your thoughts on it; I would appreciate that.

I am signing off, for now.

> STAY HEALTHY. STAY SAFE. TAKE CARE.
> HELP THOSE WHO NEED YOU]]></content:encoded>
        <pubDate>Sun, 17 May 2020 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Express.js File Structuring Guide]]></title>
        <link>https://ohmyscript.com/blogs/express-file-structuring/</link>
        <guid>https://ohmyscript.com/blogs/express-file-structuring/</guid>
        <description><![CDATA[guide to structuring express.js applications for clean code and maintainability.]]></description>
        <content:encoded><![CDATA[# Express.js File Structuring Guide

App/file structuring is one step toward becoming a clean coder. There is no single standard structure as such, but a commonly followed one is discussed below.

I am a beginner, and as a beginner I will explain how I went about structuring the files of an Express application. I used the Express framework with a MongoDB database, and Mongoose for MongoDB object modelling.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/5vzr2cps82xkoamlm9qy.png)

---

> “ There is a luxury in self-reproach. When we blame ourselves we feel no one else has a right to blame us. ”
>
> ~ Oscar Wilde, The Picture of Dorian Gray

It is best to use an application scaffolder to get a generalised initial structure. I would suggest going with the Express application generator or a Yeoman application generator.

The generated app will have a directory structure something like this:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/b19r1f9puza4petmlv6q.png)

## Directory Structure

Here, you can create another folder named “src”.

Copy and paste the routes directory to the src folder.

And also create folders namely:

1.  **_Models_**
2.  **_Routes_** (already exists)
3.  **_Controllers_**
4.  **_Services_**
5.  **_Repositories_**

These are the basic directories that the `src` folder will hold.

Now, you might wonder what each directory holds and does:

- **_Models directory_** holds the files that keep the schemas/data models required for your current project.
- **_Routes directory_** holds the route files. _Routing_ refers to how an application’s endpoints (URIs) respond to client requests; this directory basically defines your app’s routes.
- **_Controllers directory_** holds the controller files, which validate whatever data is sent from the frontend. Request and response handling is taken care of in this directory.
- **_Services directory_** holds the service files, which deal with the business logic of the API. This is the final filtration before the Data Access Layer: here, the data received is shaped into the final payload used to query the database.
- **_Data Access Layer/Repositories_** deals with the queries that have to be executed for each API. All the CRUD operations against the database are taken care of in this folder.
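To make the division of labour above concrete, here is a framework-free miniature of the same layering (all names are illustrative, not from a real project): the controller validates, the service shapes the payload, and the repository talks to storage, with a plain in-memory array standing in for MongoDB:

```typescript
// A toy user record; in a real app this interface lives in models/.
interface User { id: number; name: string; }

// Repository (Data Access Layer): the only code that touches storage.
const store: User[] = [];
const userRepository = {
  insert(u: User): User { store.push(u); return u; },
  findAll(): User[] { return [...store]; },
};

// Service: business logic, shaping validated input into the final
// payload before it reaches the repository.
const userService = {
  createUser(name: string): User {
    const user: User = { id: store.length + 1, name: name.trim() };
    return userRepository.insert(user);
  },
};

// Controller: validates the request body and delegates to the service.
function createUserController(body: { name?: string }): User {
  if (!body.name || body.name.trim() === "") {
    throw new Error("name is required");
  }
  return userService.createUser(body.name);
}
```

Each layer only ever calls the one directly below it, which is what keeps the flow easy to follow and each piece easy to test in isolation.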

This is the basic application structuring method I opted to go with.

This method is a stepping stone to writing clean code. Clean code is not just about solving complex logic in a simple way or writing code neatly; clean coding is the art of writing code such that a beginner, or even someone from a non-programming background, can understand the flow of what you have written.

Apart from this, we can also go with various extra folders above the `src` directory.

Say your front end asks for data from the server in a particular shape. You can then build a **_transformation layer/transformation folder_** whose files define functions that convert the data received from the database into the format your front-end developers ask for.
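A transformer of this kind can be a single pure function. For instance (a sketch; the field names are assumed, not from a real schema):

```typescript
// Shape the database hands back (MongoDB-style, illustrative).
interface UserDocument { _id: string; first_name: string; last_name: string; }

// Shape the front end asked for.
interface UserDTO { id: string; fullName: string; }

// Pure transformation: one document in, one DTO out, no side effects.
function toUserDTO(doc: UserDocument): UserDTO {
  return { id: doc._id, fullName: `${doc.first_name} ${doc.last_name}` };
}
```

Because the function is pure, a list endpoint can reuse it with a simple `docs.map(toUserDTO)`.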

You can also define a folder named **_helpers,_** holding functionality you want to reuse across multiple APIs or scenarios. Instead of defining it everywhere, you define it once and call it wherever required.

Another folder could be **_middlewares,_** where you define all your route-level middlewares once and call them in your routes as chained functions.
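In Express, that chaining is just extra function arguments on the route, e.g. `router.get('/users', requireAuth, listUsers)`. Here is a dependency-free sketch of that pattern (the types and names are illustrative stand-ins for Express's request/response/next trio, not its real API):

```typescript
// Minimal stand-ins for Express's req, res, and next.
type Req = { headers: Record<string, string>; user?: string };
type Res = { status: number; body?: unknown };
type Next = () => void;
type Middleware = (req: Req, res: Res, next: Next) => void;

// A route-level middleware: runs before whatever it is chained with.
const requireAuth: Middleware = (req, res, next) => {
  if (!req.headers["authorization"]) {
    res.status = 401;
    return; // stop the chain here
  }
  req.user = req.headers["authorization"];
  next();
};

// Run the chain left-to-right, the way a router would.
function runChain(req: Req, res: Res, ...chain: Middleware[]): Res {
  let i = 0;
  const next: Next = () => {
    const mw = chain[i++];
    if (mw) mw(req, res, next);
  };
  next();
  return res;
}
```

The key idea is that a middleware decides whether the chain continues (`next()`) or stops, which is exactly how Express route-level middlewares behave.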

## Digging in programming principles

Make sure you go through some simple concepts like :

- **_KISS (Keep It Simple, Stupid)_**
- **_DRY (Don't Repeat Yourself)_**
- **_TDA (Tell, Don’t Ask)_**
- **_SoC (Separation of Concerns)_**
- **_YAGNI (You Aren’t Gonna Need It)_**

I am not 100% sure this is the standard format, but as a beginner, I believe it is a good way to learn to structure code in Express or any other framework. Starting out, I struggled with this myself: I believed all the logic was to be dumped into controllers, and only later learnt that this was wrong.

Every problem statement will have its own suitable way of structuring the files. This is a very common method.

I hope the article helps you understand a little about how to structure your back-end application in Express.js.

To get a clearer picture of how the folders and code would look, refer to my next article, "How to write a CRUD API in Express", coming soon.

If you have any queries, please drop a mail to mrshravankumarb@gmail.com.

Drop your views as comments.

Signing off till next time :)]]></content:encoded>
        <pubDate>Thu, 10 Oct 2019 00:00:00 GMT</pubDate>
      </item>
      <item>
        <title><![CDATA[Welcome to My Blog]]></title>
        <link>https://ohmyscript.com/blogs/welcome/</link>
        <guid>https://ohmyscript.com/blogs/welcome/</guid>
        <description><![CDATA[An introduction to my writing and what you can expect to find here.]]></description>
        <content:encoded><![CDATA[# Welcome to My Blog

I am excited to share my thoughts on technology, development, and the lessons I have learned along the way. This space will serve as a collection of essays, tutorials, and reflections on building products.

## What to Expect

My writing focuses on making complex technical concepts accessible and interesting. I believe that great documentation and clear explanations are key to helping others learn.

- Developer tools and workflows
- Product development insights
- Technical deep dives
- Career lessons and advice

## Get in Touch

I am always interested in connecting with fellow developers and creators. Feel free to reach out if you would like to discuss any of these topics or collaborate on something interesting.]]></content:encoded>
        <pubDate>Tue, 08 Oct 2019 00:00:00 GMT</pubDate>
      </item>
  </channel>
</rss>