Building Microservices with .NET: A Practical Guide

Our approach to designing and implementing microservices architecture using .NET, PostgreSQL, and message queues.

Microservices architecture promises independent deployability, technology flexibility, and team autonomy. But in practice, poorly implemented microservices can be worse than a well-structured monolith. After building multiple microservice-based platforms at Innoware, we've developed a pragmatic approach that captures the benefits while managing the inherent complexity.

The first question we always ask is: "Does this project actually need microservices?" For early-stage products with small teams, a modular monolith is often the better choice. Microservices make sense when you need independent scaling, have multiple teams working in parallel, or need different technology stacks for different parts of the system. Starting monolithic and extracting services as needed is a valid and often superior strategy.

Service Boundaries

The most critical decision in microservices is where to draw the boundaries. Get it wrong, and you'll spend more time coordinating between services than you would have in a monolith. We use domain-driven design to identify bounded contexts, which naturally map to service boundaries.

A bounded context is a part of the business where a particular model applies. In a healthcare platform, for example, "Patient" means something different in scheduling (availability, preferences, history) versus billing (insurance, payment methods, outstanding balance). These are two bounded contexts, and they should be two services.
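As a sketch of that idea (the type and property names here are illustrative, not from an actual Innoware codebase), each bounded context defines its own Patient model, and neither leaks into the other:

```csharp
using System;
using System.Collections.Generic;

// Demo: each context constructs its own Patient independently.
var schedulingView = new Scheduling.Patient(
    Guid.NewGuid(), "mornings", new List<DateTime>());
var billingView = new Billing.Patient(
    Guid.NewGuid(), "Acme Health", 125.50m);

Console.WriteLine(schedulingView.PreferredTimeSlot);
Console.WriteLine(billingView.InsuranceProvider);

namespace Scheduling
{
    // "Patient" as the scheduling service sees it:
    // availability, preferences, visit history.
    public record Patient(
        Guid Id,
        string PreferredTimeSlot,
        IReadOnlyList<DateTime> VisitHistory);
}

namespace Billing
{
    // "Patient" as the billing service sees it:
    // insurance and money. No scheduling concerns appear here.
    public record Patient(
        Guid Id,
        string InsuranceProvider,
        decimal OutstandingBalance);
}
```

When the two services each own a model like this, a change to billing rules never forces a redeploy of scheduling.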

We also apply the "team ownership" heuristic: if a single team can own the entire lifecycle of a service (development, testing, deployment, monitoring), the boundary is probably right. If a change frequently requires coordinating across multiple teams, the boundaries need to be redrawn.

Communication Patterns

We use a combination of synchronous and asynchronous communication, and choosing the right pattern for each interaction is crucial. Synchronous communication (we prefer gRPC over REST for service-to-service calls due to its type safety and performance) is appropriate when the caller needs an immediate response - for example, validating a user's authentication token.
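The shape of such a call can be sketched without any network infrastructure (the interface and implementation below are hypothetical stand-ins; in production the interface would be backed by a gRPC client generated from a .proto contract):

```csharp
using System;
using System.Threading.Tasks;

// The caller awaits the answer because it cannot proceed without it:
// an invalid token must short-circuit the request immediately.
ITokenValidator validator = new InMemoryTokenValidator();

bool ok = await validator.ValidateAsync("token-123");
Console.WriteLine(ok ? "authorized" : "rejected");

public interface ITokenValidator
{
    Task<bool> ValidateAsync(string token);
}

// In-memory stub standing in for the real auth service.
public class InMemoryTokenValidator : ITokenValidator
{
    public Task<bool> ValidateAsync(string token) =>
        Task.FromResult(token.StartsWith("token-"));
}
```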

Asynchronous communication via message queues (we use RabbitMQ for task queues and Kafka for event streaming) is appropriate for everything else. When an order is created, the order service doesn't need to wait for the email to be sent or the analytics to be recorded. It publishes an event and moves on.
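A minimal sketch of that publish-and-move-on flow, with an in-memory queue standing in for RabbitMQ or Kafka (all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

var bus = new InMemoryBus();

// Downstream services subscribe to the event.
bus.Subscribe("OrderCreated", payload => Console.WriteLine($"email service saw {payload}"));
bus.Subscribe("OrderCreated", payload => Console.WriteLine($"analytics saw {payload}"));

// The order service publishes and returns immediately; it does not
// wait for email or analytics to do their work.
bus.Publish("OrderCreated", "order-42");
Console.WriteLine("order service done");

// Later, consumers process the queued event on their own schedule.
bus.DeliverPending();

// Minimal broker stand-in. A real broker adds durability,
// acknowledgements, and delivery across process boundaries.
public class InMemoryBus
{
    private readonly Dictionary<string, List<Action<string>>> _handlers = new();
    private readonly Queue<(string Topic, string Payload)> _pending = new();

    public void Subscribe(string topic, Action<string> handler)
    {
        if (!_handlers.TryGetValue(topic, out var list))
            _handlers[topic] = list = new List<Action<string>>();
        list.Add(handler);
    }

    public void Publish(string topic, string payload) =>
        _pending.Enqueue((topic, payload));

    public void DeliverPending()
    {
        while (_pending.Count > 0)
        {
            var (topic, payload) = _pending.Dequeue();
            if (_handlers.TryGetValue(topic, out var handlers))
                foreach (var h in handlers) h(payload);
        }
    }
}
```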

We implement the Outbox Pattern to ensure reliable event publishing. Instead of publishing events directly to the message broker (which can fail and leave the system in an inconsistent state), we write events to an outbox table in the same database transaction as the business operation. A separate process reads the outbox and publishes events to the broker. This guarantees at-least-once delivery without distributed transactions.
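The mechanics can be sketched in a few lines. The in-memory "database" below is purely illustrative; in a real service, Orders and Outbox are tables in the service's PostgreSQL database and the atomic step is an actual database transaction:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var db = new FakeDatabase();
var broker = new List<string>(); // stand-in for RabbitMQ/Kafka

// 1. The business row and the outbox row are written in ONE transaction:
//    afterwards either both exist or neither does.
db.InTransaction(() =>
{
    db.Orders.Add("order-42");
    db.Outbox.Add(new OutboxMessage("OrderCreated:order-42"));
});

// 2. A separate relay process polls the outbox and publishes.
//    If marking as published fails after a successful publish, the
//    message is sent again on the next poll: at-least-once delivery.
foreach (var msg in db.Outbox.Where(m => !m.Published))
{
    broker.Add(msg.Payload);   // publish to the broker
    msg.Published = true;      // then mark as done
}

Console.WriteLine(broker[0]);

public class OutboxMessage
{
    public string Payload { get; }
    public bool Published { get; set; }
    public OutboxMessage(string payload) => Payload = payload;
}

public class FakeDatabase
{
    public List<string> Orders { get; } = new();
    public List<OutboxMessage> Outbox { get; } = new();
    private readonly object _lock = new();

    // Simulates atomicity for the sketch; no real rollback here.
    public void InTransaction(Action work)
    {
        lock (_lock) work();
    }
}
```

Because consumers may see an event more than once, handlers on the receiving side must be idempotent.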

Data Management

Each service owns its data store - this is non-negotiable. Sharing databases between services creates coupling that undermines the entire point of microservices. We primarily use PostgreSQL for transactional data (its JSONB support makes it flexible enough for most use cases) and Redis for caching, rate limiting, and session management.

For cross-service queries, we use the CQRS pattern with dedicated read models. When the product catalog service publishes a "ProductUpdated" event, the search service updates its Elasticsearch index, and the recommendation service updates its graph database. Each service has the data it needs in the format that's optimal for its queries.
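As a sketch of such a projection (a dictionary stands in for the Elasticsearch index; the event shape is hypothetical), the search service folds each event into a query-optimized copy:

```csharp
using System;
using System.Collections.Generic;

// The search service's read model: a denormalized, query-optimized copy
// of catalog data, keyed by product ID.
var searchIndex = new Dictionary<string, string>();

// Handler for the catalog service's "ProductUpdated" event.
void OnProductUpdated(string productId, string name, string description)
{
    // Project the event into the shape this service queries best:
    // one searchable text blob per product.
    searchIndex[productId] = $"{name} {description}".ToLowerInvariant();
}

OnProductUpdated("p-1", "Trail Shoes", "Lightweight waterproof runners");
OnProductUpdated("p-1", "Trail Shoes v2", "Lightweight waterproof runners");

// The read model reflects the latest event, with no live call
// back to the catalog service at query time.
Console.WriteLine(searchIndex["p-1"]);
```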

Observability

In a distributed system, observability isn't a nice-to-have - it's a survival requirement. When a request fails, you need to trace it across potentially dozens of services to find the root cause. We implement three pillars of observability from day one.

Structured logging with Serilog ensures that every log entry includes a correlation ID, service name, and relevant business context. We ship logs to Seq for searching and analysis. Distributed tracing with OpenTelemetry tracks requests across service boundaries, showing exactly where time is spent and where failures occur. Metrics with Prometheus and Grafana dashboards give us real-time visibility into system health, including request rates, error rates, latency percentiles, and resource utilization.
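The key property of a structured entry is that it is a bag of named fields, not a formatted string. Serilog produces entries like this for real (with enrichers adding the correlation ID and service name automatically); the sketch below, with made-up field values, just shows what a single entry carries and why it is searchable in Seq:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// One structured log entry: every field is individually queryable,
// so "show me all errors for CorrelationId req-7f3a across all
// services" is a single search.
var entry = new Dictionary<string, object>
{
    ["Timestamp"] = "2024-01-01T12:00:00Z",
    ["Level"] = "Error",
    ["Message"] = "Payment declined",
    ["CorrelationId"] = "req-7f3a",  // same ID in every service the request touched
    ["ServiceName"] = "payments",
    ["OrderId"] = "order-42"         // business context, not just technical detail
};

Console.WriteLine(JsonSerializer.Serialize(entry));
```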

Deployment

All our services are containerized with Docker and orchestrated with Kubernetes. Our CI/CD pipeline (Jenkins) ensures that each service can be built, tested, and deployed independently in under 10 minutes. We use rolling deployments with health checks to ensure zero-downtime releases, and automated rollback if health checks fail after deployment.

We also maintain a staging environment that mirrors production topology. Every pull request is automatically deployed to a preview environment where integration tests run against the complete service mesh. This catches cross-service issues before they reach production.