Surprising Backend Tips Revealed After 2 Years of Real-World Experience

When I started working on backend systems I didn’t know what I was doing. I repeated logic, built fragile environments, chased the next trendy framework, and overcomplicated things that didn’t need to be complex. Most of the pain came from missing the fundamentals. Things no one really teaches until you’ve broken enough code to learn them yourself.
This blog is a collection of lessons I wish I had learned earlier. It’s not about being perfect but about building systems that are clean, scalable, and maintainable in real-world conditions. Whether you’re solo or in a small team, these practices will help you ship faster, debug less, and build with confidence. Let’s get started.
📘 Leverage OpenAPI and Documentation Standards
OpenAPI has become the standard specification for modern APIs. It allows you to document your endpoints, DTOs, responses, and error codes in a structured, scalable way. Once defined, your specification can be visualized using open-source tools like SwaggerUI or Scalar, giving your team and clients a clear view of your API’s surface.
Implementation differs across codebases. Some teams still write specs manually, which I don’t recommend. Others use tools like zod-to-openapi or framework-native solutions to generate specs directly from validation schemas and DTOs. Personally, I prefer a programmatic approach: keeping the spec in sync with the codebase ensures accuracy and reduces friction.
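To make the programmatic idea concrete, here’s a hand-rolled sketch of deriving a spec object from route definitions kept next to the code. This is not a real library — tools like zod-to-openapi do the same from validation schemas — but it shows why the spec can never drift from the routes:

```typescript
// Hypothetical sketch: build an OpenAPI document from route definitions
// instead of hand-writing the spec. All names here are illustrative.
type RouteDef = {
  method: 'get' | 'post';
  path: string;
  summary: string;
  responses: Record<number, string>; // status code -> description
};

const routes: RouteDef[] = [
  { method: 'get', path: '/users', summary: 'List users', responses: { 200: 'OK' } },
  { method: 'post', path: '/users', summary: 'Create user', responses: { 201: 'Created', 400: 'Bad Request' } },
];

function buildOpenApiSpec(defs: RouteDef[]) {
  const paths: Record<string, Record<string, unknown>> = {};
  for (const d of defs) {
    paths[d.path] ??= {};
    paths[d.path][d.method] = {
      summary: d.summary,
      responses: Object.fromEntries(
        Object.entries(d.responses).map(([code, description]) => [code, { description }])
      ),
    };
  }
  return { openapi: '3.0.3', info: { title: 'My API', version: '1.0.0' }, paths };
}
```

Serve the resulting object at `/docs` through SwaggerUI or Scalar and it updates the moment a route definition changes.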
💬 What About the Client? Common Pitfalls & Solutions
So far, we’ve focused on backend documentation. You might visit your generated docs at:
http://localhost:3000/docs
…and see a beautifully rendered list of endpoints. But if you’re a Full Stack Engineer, a solo dev, or part of a frontend team, manually recreating every type and contract on the client side is a recipe for inconsistency. Even with AI assistance, it doesn’t scale, especially when schemas evolve.
That’s why I recommend using libraries like OpenAPI TypeScript. These tools generate type-safe clients directly from your OpenAPI spec, ensuring your frontend stays aligned with your backend without redundant effort. In our mono-repo setup, this approach has been a game-changer.
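To show the payoff, here’s roughly what a generated `paths` type looks like and a tiny typed fetch helper built on top of it. The `paths` type below is hand-written as a stand-in for openapi-typescript output, and `typedGet` is a hypothetical helper (real projects often reach for openapi-fetch instead):

```typescript
// Stand-in for what openapi-typescript generates from the spec:
type paths = {
  '/users/{id}': {
    get: {
      responses: { 200: { content: { 'application/json': { id: string; name: string } } } };
    };
  };
};

// Extract the 200-response JSON type for a given path:
type JsonOf<P extends keyof paths> =
  paths[P]['get']['responses'][200]['content']['application/json'];

// Minimal typed GET helper — the compiler now knows the response shape,
// so a renamed backend field fails the frontend build instead of production.
async function typedGet<P extends keyof paths>(
  path: P,
  fetcher: (url: string) => Promise<unknown>
): Promise<JsonOf<P>> {
  return (await fetcher(path as string)) as JsonOf<P>;
}
```

The win is entirely at compile time: `typedGet('/users/{id}', …)` is known to resolve to `{ id: string; name: string }` with zero hand-maintained types.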
🧠 Separate Business Logic
This isn’t a blog about hexagonal architecture, clean architecture, or MVC and I’m not here to tell you which one to follow. What matters is separation. You need to understand how to structure your application so that logic isn’t tangled in a single file or scattered across untraceable layers.
✨ What Separation Actually Means
Repositories → Handle data access and persistence
Controllers → Receive requests, delegate to services
Services → Contain business logic, orchestrate flows
Infrastructure Layers → External systems (e.g., DB, cache, queues)
DTOs → Define shape and contract of data
Entities / Tables → Define the core business models and how they’re persisted
Routes → Map endpoints to controllers
Validations → Ensure data integrity before logic runs
Types → Enforce clarity across the stack
Dependency Injection / Containers → Manage lifecycle and initialization
❌ What to avoid
I’ve seen this countless times. It’s okay for prototypes and quick iterations, but please don’t ship this in a project that will scale:
app.post('/create-user', async (req, res) => {
const { name, email } = req.body;
const user = await db.insert({ name, email });
res.json(user);
});
✅ What to aim for
// route.ts
router.post('/create-user', userController.createUser);
// controller.ts
export const createUser = async (req, res) => {
const user = await userService.create(req.body);
res.json(user);
};
// service.ts
export const create = async (data) => {
validateUser(data);
return userRepository.insert(data);
};
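One practical payoff of this layering that’s easy to miss: the service can be unit-tested with a fake repository — no HTTP server, no database. A minimal sketch (the factory shape and names here are assumptions, not taken from the snippet above):

```typescript
// The layered service written as a factory so the repository is injected,
// which makes swapping in an in-memory fake trivial.
type User = { id: string; name: string; email: string };

type UserRepository = { insert: (data: Omit<User, 'id'>) => Promise<User> };

const makeUserService = (repo: UserRepository) => ({
  create: async (data: { name: string; email: string }) => {
    // Stand-in for validateUser(data):
    if (!data.email.includes('@')) throw new Error('Invalid email');
    return repo.insert(data);
  },
});

// In-memory fake standing in for the real DB-backed repository:
const fakeRepo: UserRepository = {
  insert: async (data) => ({ id: 'u1', ...data }),
};
```

With the express handler version from the “what to avoid” snippet, testing this same logic would require spinning up a server and a database.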
📁 Folder Structure and Architecture
You don’t need to follow a single architectural dogma. Whether it’s Clean Architecture, DDD, Screaming Architecture, or a hybrid, what matters is that your structure scales and your team can read it without friction.
Personally, I lean toward a layered pattern that balances clarity and modularity:
API Layer → Routes, Controllers, Schemas, Services, Repositories, Validations, Entities
Global Layer → Shared infrastructure (AWS SDKs, Sentry, Stripe, etc.), types, utilities, and specs
📋 Document Everything
In the codebase I’m currently working on, there are several non-obvious configurations that, if undocumented, would lead to significant friction and confusion. For example, due to limitations with ESM modules and specific ORM behaviors, we must compile the backend application before executing database migrations. Additionally, our seeding process involves junction tables that are essential for scalability, and each project within the monorepo includes its own .env.example file to manage environment variables.
Without clear documentation of:
The ESM module constraints impacting migration execution.
The seeding logic required to initialize the application correctly.
The environment variables distributed across modules.
The onboarding experience would be chaotic, and maintaining the system would become unnecessarily burdensome.
✨ Best Practices
If you uncover a cryptic configuration, document it.
If you discover a way to improve CI/CD, share it and document it.
If you encounter a bug that only you understand, fix it and document it immediately.
🎹 DRY and Design Patterns
When I first joined this backend codebase, I made a classic mistake. I created a separate repository for every single entity, even when their logic was nearly identical…
For example, we had tables for daily, weekly, and monthly tasks, and I wrote three distinct repositories with almost the same CRUD operations:
// dailyTask.repository.ts
export const createDailyTask = (data) => db.dailyTask.create(data);
// weeklyTask.repository.ts
export const createWeeklyTask = (data) => db.weeklyTask.create(data);
// monthlyTask.repository.ts
export const createMonthlyTask = (data) => db.monthlyTask.create(data);
Technically, this worked. But it introduced a long-term problem: Every schema change, every refactor, every new feature had to be triplicated. It was fragile, redundant, and emotionally exhausting.
📚 Refactoring with DRY Principles
To resolve this, I consolidated the logic into a generic task repository using abstraction and shared patterns:
// task.repository.ts
export const createTask = (type: 'daily' | 'weekly' | 'monthly', data) => {
return db[`${type}Task`].create(data);
};
You can apply guard clauses, strategy patterns, or other abstractions depending on your context. The key is to avoid duplicating logic that behaves the same across entities.
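One way to take the `db[...]` lookup a step further is a small factory that produces the whole CRUD set per task type. Sketched here with an in-memory store standing in for the database (the table shape is an assumption for illustration):

```typescript
// Generic repository factory: one implementation, N instances.
type TaskType = 'daily' | 'weekly' | 'monthly';
type Task = { id: number; title: string };

// In-memory stand-in for db.dailyTask / db.weeklyTask / db.monthlyTask:
const tables: Record<TaskType, Task[]> = { daily: [], weekly: [], monthly: [] };

const makeTaskRepository = (type: TaskType) => ({
  create: (data: Omit<Task, 'id'>): Task => {
    const task = { id: tables[type].length + 1, ...data };
    tables[type].push(task);
    return task;
  },
  findAll: (): Task[] => [...tables[type]],
});

const dailyTasks = makeTaskRepository('daily');
const weeklyTasks = makeTaskRepository('weekly');
```

A schema change now touches one factory instead of three files, and each instance still reads like a dedicated repository at the call site.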
✅ Benefits of This Approach
Reduces duplication
Centralizes logic
Simplifies future changes
Improves testability and readability
Honors the DRY principle
🧠 KISS – Don’t Overcomplicate Things
No. You don’t need an AbstractClassAppInitializationFactory when you’re not even in the market yet.
The purpose of an MVP (Minimum Viable Product) or MLP (Minimum Lovable Product) is to deliver something of value to your initial users. KISS is not an excuse to ignore best practices. You SHOULD still use reusable patterns, structure your code, apply Dependency Injection, and follow clean conventions. But complexity should be introduced only when it’s necessary.
You can scale with Redis. You can scale with NestJS. You can scale with a monolith that’s well-structured.
But if you’re building abstract design patterns that delay delivery just to sound smart, you’re not building a product. You’re digging your own grave, especially if your team is small or you’re solo.
❌ Bad - Too Much Initial Complexity
interface PostDeletionStrategy {
execute(postId: string, userId: string): Promise<void>;
}
class DeletePostHandler implements PostDeletionStrategy {
constructor(
private repo: {
getById: (id: string) => Promise<{ userId: string } | null>;
deleteById: (id: string) => Promise<void>;
},
private auth: {
isAuthorized: (userId: string, ownerId: string) => boolean;
}
) {}
async execute(postId: string, userId: string): Promise<void> {
const post = await this.repo.getById(postId);
if (!post) throw new Error('PostNotFound');
if (!this.auth.isAuthorized(userId, post.userId)) throw new Error('Forbidden');
await this.repo.deleteById(postId);
}
}
class PostService {
constructor(private deletion: PostDeletionStrategy) {}
async delete(postId: string, userId: string) {
await this.deletion.execute(postId, userId);
}
}
const service = new PostService(new DeletePostHandler(repo, auth));
await service.delete('post123', 'user456');
This works, but it’s too much machinery for a single operation. You’ve introduced interfaces, handlers, and strategy patterns before even validating the product…
✅ The Fix: Just Go to Market with Good Practices
type Dependencies = {
repo: {
getById: (id: string) => Promise<Post | null>;
deleteById: (id: string) => Promise<void>;
};
};
export function PostService({ repo }: Dependencies) {
return {
deleteById: async (id: string, userId: string) => {
const post = await repo.getById(id);
if (!post) throw new Error('Not found');
if (post.userId !== userId) throw new Error('Forbidden');
await repo.deleteById(id);
},
};
}
This version is clean, testable, and scalable. It honors separation of concerns without being complex. It gets you to market faster and will allow you to work on other features as well.
🐳 Docker - Real Life Development Environments
A lot of people ask me why I love Docker. A year ago, a client asked me to run a project for them. I cloned the GitHub repo, followed the installation instructions, and then… CHAOS.
I had to manually set up PostgreSQL, Redis, Node.js, PNPM, Next.js, run database migrations, and configure Stripe and other external integrations. It was one of the most awkward dev experiences I’ve ever had. Months later, I learned Docker and Docker Compose, and it changed everything. I now save 8+ hours a week by avoiding manual setups, Linux environment configs, and repetitive integration commands. Everything in my dev environment is dockerized and automated.
Do yourself a favor: leverage automation scripts in your IDE, learn Docker and containerization, and document every environment variable in a .env.example file.
You don’t want to go through what I did. TRUST ME.
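For reference, that entire manual checklist collapses into one `docker compose up`. A minimal sketch of what such a Compose file might look like — the images, credentials, and service names below are placeholders, not the actual stack from that project:

```yaml
# docker-compose.yml — illustrative only; adjust images and env to your stack.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  api:
    build: .
    env_file: .env        # pair this with a committed .env.example
    depends_on:
      - db
      - redis
    ports:
      - "3000:3000"
```

New teammates go from `git clone` to a running stack in one command, and the onboarding doc shrinks to a paragraph.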
🧾 Clear API Responses
Clear API responses matter. Don’t manually return JSON objects in every controller. You’ll forget field names, break consistency, and end up with endpoints that return "info" instead of "data" because you got creative.
Instead, build a simple response adapter. It can be just a dictionary of methods that wrap your response logic. This way, every success, error, or edge case goes through the same structure. You define it once and call it everywhere. It’s not fancy, it’s just maintainable. Your future self will thank you when debugging a 409 Conflict at scale doesn’t feel like a Dark Souls bossfight. Keep it clean. Keep it predictable.
❌ Bad Practice - Manually Creating the Response
return res.status(200).json({
success: true,
data: someData,
statusCode: 200
})
✅ Good Practice - Using an Adapter
An adapter is an artifact that turns data from one shape into another — ideal for API responses and data transformations, like the example below:
export const ApiResponse = {
success: <T>(res: Response, data?: T, message = 'Success') =>
res.status(200).json({ statusCode: 200, message, data }),
created: <T>(res: Response, data?: T, message = 'Created') =>
res.status(201).json({ statusCode: 201, message, data }),
error: (res: Response, message = 'Error', statusCode = 500, error?: any) =>
res.status(statusCode).json({ statusCode, message, error }),
notFound: (res: Response, message = 'Not Found') =>
res.status(404).json({ statusCode: 404, message }),
badRequest: (res: Response, message = 'Bad Request', error?: any) =>
res.status(400).json({ statusCode: 400, message, error }),
};
You can then use it in a controller like this:
return ApiResponse.success(res, serviceResponse, 'Successfully fetched the user.')
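The adapter also pairs well with a centralized error handler, so controllers can just throw and one place decides the status code. A self-contained sketch — the `Res` type below is a minimal stand-in for Express’s `Response`, and the two-method adapter mirrors the shape above:

```typescript
// Minimal stand-in for Express's Response, so this sketch runs on its own:
type Res = {
  statusCode: number | null;
  body: unknown;
  status(code: number): Res;
  json(payload: unknown): Res;
};

const makeRes = (): Res => {
  const res: Res = {
    statusCode: null,
    body: null,
    status(code) { res.statusCode = code; return res; },
    json(payload) { res.body = payload; return res; },
  };
  return res;
};

// Reduced version of the adapter, same response envelope:
const ApiResponse = {
  success: <T>(res: Res, data?: T, message = 'Success') =>
    res.status(200).json({ statusCode: 200, message, data }),
  error: (res: Res, message = 'Error', statusCode = 500) =>
    res.status(statusCode).json({ statusCode, message }),
};

// Centralized error handler: controllers throw, one place translates.
const handleError = (err: unknown, res: Res) => {
  const message = err instanceof Error ? err.message : 'Error';
  return message === 'Not Found'
    ? ApiResponse.error(res, message, 404)
    : ApiResponse.error(res, message, 500);
};
```

In a real Express app this lives in an error-handling middleware, so no controller ever builds a JSON error body by hand.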
🔥 Leverage Cache
Why does caching even matter? Let’s talk about how we use junction tables in an application. When setting up the app in a new environment, we need to seed the database with categories, filters, and metadata. It’s not fun, but it’s the most scalable way to keep things normalized. We tried using JSON blobs and string[] arrays, but they didn’t scale well. So we built a proper repository and added caching.
We constantly query for things like interests and categories. These don’t change often, maybe once every few months. But we were still hitting the database every time. That works with 100 users. Not with 10,000. It’s a waste of resources at scale.
const TOPICS_CACHE_KEY = 'topics:all';
const CACHE_TTL = 60 * 60 * 24; // 24 hours
export const UserTopicsRepository = (db: any, redis: any) => {
const getTopics = async (): Promise<{ id: string; slug: string }[]> => {
const cached = await redis.get(TOPICS_CACHE_KEY);
if (cached) return JSON.parse(cached);
const topics = await db.select().from('Topics');
await redis.set(TOPICS_CACHE_KEY, JSON.stringify(topics), { EX: CACHE_TTL });
return topics;
};
const syncUserTopics = async (userId: string, topicSlugs: string[]) => {
await db.delete('UserTopics').where({ userId });
if (topicSlugs.length === 0) return;
const allTopics = await getTopics();
const slugToId = new Map(allTopics.map(t => [t.slug, t.id]));
const topicsToInsert = topicSlugs.map(slug => {
const id = slugToId.get(slug);
if (!id) throw new Error(`Topic '${slug}' not found`);
return { userId, topicId: id };
});
await db.insert('UserTopics').values(topicsToInsert);
};
return { syncUserTopics };
};
That’s why we use Redis. Another example: when anonymous users view public posts, you can’t afford to fetch the same content over and over. These users don’t carry state, but they bring volume. So we cache what’s static: metadata, categories, and tags. We only fetch dynamic state like isLiked or isFollowing when needed.
Follow a simple rule: cache what doesn’t change often. This protects your database, improves response times, and keeps your system scalable.
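The corollary of “cache what doesn’t change often” is: invalidate when it does change. A minimal sketch of the write path, with an in-memory stand-in for the Redis client (the cache key is borrowed from the snippet above; everything else is assumed for illustration):

```typescript
// In-memory stand-in for the Redis client used above:
const store = new Map<string, string>();
const redis = {
  get: async (k: string) => store.get(k) ?? null,
  set: async (k: string, v: string) => void store.set(k, v),
  del: async (k: string) => void store.delete(k),
};

const TOPICS_CACHE_KEY = 'topics:all';

// Stand-in for the Topics table:
let dbTopics = [{ id: '1', slug: 'music' }];

const getTopics = async () => {
  const cached = await redis.get(TOPICS_CACHE_KEY);
  if (cached) return JSON.parse(cached) as typeof dbTopics;
  await redis.set(TOPICS_CACHE_KEY, JSON.stringify(dbTopics));
  return dbTopics;
};

// Write path: update the source of truth, then drop the cached copy
// so the next read repopulates it — no stale data, no manual refresh.
const addTopic = async (topic: { id: string; slug: string }) => {
  dbTopics = [...dbTopics, topic];
  await redis.del(TOPICS_CACHE_KEY);
};
```

For data that changes once every few months, this delete-on-write approach is usually enough; the TTL in the repository above is just a safety net.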
🎹 Final Thoughts
A year and a half ago, I hated backend development. But that was because I didn’t know these principles: I was constantly repeating myself, creating fancy implementations that didn’t scale, and duplicating my code over and over.
The good thing is that you don’t need to suffer through fragile environments, weird abstractions, or inconsistent APIs. You just need to build with intention. Whether it’s Docker, clean response adapters, DRY repositories, or Redis caching, every decision should move you closer to clarity, not complexity.
If you’re solo or in a small team, your codebase is your second brain. Don’t clutter it with patterns you don’t need (or understand) yet. Don’t delay delivery to sound smart. Build something that works, scales, and makes sense to the next person who reads it, even if that person is you, three months from now.
Honestly speaking, this is the guide I would have shared with my past self. I had to learn these things the hard way, but you don’t have to. Never stop learning, don’t be afraid of breaking things, and always keep experimenting.



