Architecting a Heterogeneous Micro-Frontend System with Module Federation and gRPC-Web


1. Defining the Technical Challenge

In a large enterprise, the evolution of a tech stack is rarely a clean-slate affair. We were facing precisely such a situation: our core back-office system is a massive Angular monolith, maintained by a team with deep expertise in the framework. Meanwhile, new user-facing business lines want to leverage Next.js’s Server-Side Rendering (SSR) capabilities to optimize for SEO and first-paint performance. Maintaining two separate systems would create a fragmented user experience and waste development resources. The core challenge, therefore, was this: how can we seamlessly integrate two disparate frameworks like Angular and Next.js within a single application, while ensuring an efficient, type-safe, and well-defined communication contract with our backend microservices?

2. Evaluating Alternative Solutions

When integrating heterogeneous frameworks, traditional solutions come with distinct trade-offs.

Option A: iFrames

This is the oldest and most isolated approach: embedding one application within an <iframe> of another.

  • Pros: Simple to implement. Styles and JavaScript runtimes are completely isolated, posing almost no technical risk.
  • Cons: The user experience is abysmal. iFrame loading, auto-sizing, URL synchronization, and cross-origin communication (postMessage) are all incredibly clunky. More critically, it shatters the seamless feel of a Single-Page Application (SPA) and is detrimental to SEO. In any serious project, this is an obsolete approach that should be avoided.

Option B: Web Components

This involves wrapping Angular components or Next.js pages as standard Web Components (Custom Elements), allowing them to be used natively in the other framework.

  • Pros: Provides a standard, framework-agnostic component model, achieving technical decoupling.
  • Cons:
    1. Bundle Size: Each Web Component needs to embed its framework’s runtime. Even if multiple components share the same framework, bundle optimization can be problematic.
    2. Developer Experience: A significant amount of boilerplate code is required for wrapping, especially when handling property passing and event listening (a sketch of this wrapping follows the list).
    3. Style Isolation: While Shadow DOM offers powerful style isolation, it introduces new complexities for theme sharing and global style overrides.
    4. Communication Mechanism: Cross-component communication typically relies on the native DOM event system. For complex cross-application state management, an additional event bus or state management library is needed, increasing architectural complexity.
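
To make the boilerplate concrete, here is a minimal sketch of wrapping an Angular component as a Custom Element using @angular/elements (the component and tag name are illustrative):

// Hypothetical wrapper module exposing UserCardComponent as <user-card>.
import { Injector, NgModule } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { BrowserModule } from '@angular/platform-browser';
import { UserCardComponent } from './user-card.component';

@NgModule({
  declarations: [UserCardComponent],
  imports: [BrowserModule],
})
export class ElementsModule {
  constructor(private injector: Injector) {}

  // Invoked because the module declares no bootstrap component.
  ngDoBootstrap(): void {
    const element = createCustomElement(UserCardComponent, { injector: this.injector });
    customElements.define('user-card', element);
  }
}

Every exposed component needs a registration like this, on top of manually bridging inputs to attributes and outputs to DOM events.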

3. The Chosen Architecture and Rationale

After careful consideration, we opted for an architecture based on Webpack 5’s Module Federation for micro-frontend integration, coupled with gRPC-Web as the unified communication protocol for the front and back ends.

Why Module Federation?

Module Federation is a lower-level solution that allows a JavaScript application to dynamically load code from another, independently deployed application at runtime. Unlike the component-level encapsulation of iFrames or Web Components, it enables module-level sharing.

  • Near-Native Integration Experience: A remote module is consumed by the host application as if it were a regular asynchronous component. It can be seamlessly integrated anywhere, including within the routing system.
  • Efficient Dependency Sharing: You can explicitly configure shared libraries (e.g., React, Angular, RxJS) between micro-apps. Shared dependencies are loaded only once in the browser, significantly reducing the total bundle size.
  • Independent Deployment & Development: Each micro-app (Remote) can be developed, tested, and deployed independently, aligning with microservice principles.

Why gRPC-Web?

In such a complex, heterogeneous frontend architecture, communication with the backend Go microservices must be robust and efficient.

  • Strongly-Typed Contracts: Using Protocol Buffers (.proto) to define service interfaces and data structures allows for the automatic generation of both Go server code and TypeScript client code. This eliminates a vast class of potential data format errors at compile time, which is crucial for cross-team collaboration.
  • Performance: gRPC uses HTTP/2 for transport and binary serialization, resulting in a smaller payload and higher efficiency compared to JSON over HTTP/1.1.
  • Language-Agnostic: The .proto file serves as the single source of truth. Any language can generate corresponding code from it, facilitating the future introduction of microservices written in other languages.

High-Level Architecture Diagram

Here’s an overview of the architecture using Mermaid.js:

graph TD
    subgraph Browser
        A[Next.js Shell App] -- "Dynamic Import" --> B{Angular Remote Module};
        A -- "gRPC-Web Calls" --> C[Envoy Proxy];
        B -- "gRPC-Web Calls" --> C;
    end

    subgraph Backend Infrastructure
        C -- "gRPC over HTTP/2" --> D[gRPC-Go Service];
        D -- "CRUD" --> E[(Database)];
    end

    F[Developer] -- "Defines" --> G[user.proto];
    G -- "protoc-gen-go" --> D;
    G -- "protoc-gen-ts" --> H[TypeScript Client];

    subgraph Shared Code
        H -- "Used by" --> A;
        H -- "Used by" --> B;
    end

In this architecture, Next.js acts as the main application (Shell), responsible for the overall layout, routing, and user session. The Angular application is exposed as a remote module (Remote), dynamically loaded and rendered within a specific area of a Next.js page. Both applications use the same TypeScript gRPC client, generated from the .proto file, to communicate with the backend. Since browsers do not natively support the gRPC protocol, a proxy like Envoy is required to translate gRPC-Web requests into standard gRPC requests.

4. Core Implementation Overview

We’ll walk through the key steps to implement this architecture.

4.1. Defining the Service Contract (user.proto)

This is the cornerstone of the entire system. A well-defined .proto file serves as the foundation for cross-stack development.

// proto/user/v1/user.proto
syntax = "proto3";

package user.v1;

option go_package = "example/gen/user/v1;userv1";

// UserService defines the RPC methods related to users.
service UserService {
  // GetUser retrieves a user by their ID.
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}

// User message represents a user entity.
message User {
  string id = 1;
  string name = 2;
  string email = 3;
}

// GetUserRequest is the request for the GetUser RPC.
message GetUserRequest {
  string id = 1;
}

// GetUserResponse is the response for the GetUser RPC.
message GetUserResponse {
  User user = 1;
}

4.2. Backend Implementation (gRPC-Go)

First, we generate the Go server code and implement the service.

Generate Code:

# Install necessary tools
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

# Run the generation command
protoc --proto_path=proto \
  --go_out=gen --go_opt=paths=source_relative \
  --go-grpc_out=gen --go-grpc_opt=paths=source_relative \
  proto/user/v1/user.proto

Server Implementation (cmd/server/main.go):

The implementation below uses an in-memory mock for data, but already demonstrates the logging and gRPC status-code error handling that a production service needs.

package main

import (
	"context"
	"fmt"
	"log"
	"net"

	userv1 "example/gen/user/v1" // Import generated code

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/reflection"
	"google.golang.org/grpc/status"
)

const port = ":9090"

// mockUserData serves as a mock database.
var mockUserData = map[string]*userv1.User{
	"1": {Id: "1", Name: "Alice", Email: "[email protected]"},
	"2": {Id: "2", Name: "Bob", Email: "[email protected]"},
}

// server struct implements the userv1.UserServiceServer interface.
type server struct {
	userv1.UnimplementedUserServiceServer
}

// GetUser implements the RPC method.
func (s *server) GetUser(ctx context.Context, req *userv1.GetUserRequest) (*userv1.GetUserResponse, error) {
	log.Printf("Received GetUser request for ID: %s", req.GetId())

	if req.GetId() == "" {
		// Proper error handling is critical; return standard gRPC status codes.
		return nil, status.Error(codes.InvalidArgument, "User ID cannot be empty")
	}

	user, ok := mockUserData[req.GetId()]
	if !ok {
		return nil, status.Errorf(codes.NotFound, "User with ID '%s' not found", req.GetId())
	}

	return &userv1.GetUserResponse{User: user}, nil
}

func main() {
	lis, err := net.Listen("tcp", port)
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	s := grpc.NewServer()
	userv1.RegisterUserServiceServer(s, &server{})

	// Register reflection service on gRPC server. This is useful for debugging tools like grpcurl.
	reflection.Register(s)

	log.Printf("gRPC server listening at %v", lis.Addr())
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
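
With reflection registered, you can smoke-test the service before any frontend exists, for example: grpcurl -plaintext -d '{"id": "1"}' localhost:9090 user.v1.UserService/GetUser.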

4.3. Configuring the Envoy Proxy

A gRPC service cannot be exposed directly to browsers, which have no way to speak the native gRPC protocol over HTTP/2. Envoy serves as the critical bridge, translating gRPC-Web requests into standard gRPC.

envoy.yaml Configuration:

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: auto
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: grpc_service
                  # Enable CORS for gRPC-Web requests
                  cors:
                    allow_origin_string_match:
                      - prefix: "*"
                    allow_methods: GET, PUT, DELETE, POST, OPTIONS
                    allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                    expose_headers: grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_web
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
          - name: envoy.filters.http.cors
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: grpc_service
    connect_timeout: 0.25s
    type: logical_dns
    # HTTP/2 is required for gRPC
    http2_protocol_options: {}
    lb_policy: round_robin
    load_assignment:
      cluster_name: grpc_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              # For Docker Compose, this would be the service name. When running locally, use host.docker.internal.
              socket_address: { address: host.docker.internal, port_value: 9090 }

Run Envoy, mounting the config file into the container: docker run --rm -p 8080:8080 -v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy:v1.24.0

4.4. Generating the Frontend gRPC-Web Client

Now, we generate the TypeScript code for the frontend.

# Install runtime dependencies
npm install grpc-web google-protobuf

# Install the generator plugin (also distributed as a standalone binary
# on the grpc-web releases page)
npm install -g protoc-gen-grpc-web

# Generation command
protoc --proto_path=proto \
  --js_out=import_style=commonjs,binary:./shared-client/src \
  --grpc-web_out=import_style=typescript,mode=grpcwebtext:./shared-client/src \
  proto/user/v1/user.proto

This will generate user_pb.js and user_pb.d.ts (message types) plus UserServiceClientPb.ts (the typed client). It’s best practice to place these in a shared npm package that both the Next.js and Angular applications can depend on.
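
On top of the generated code, a thin hand-written entry point in the shared package keeps the Envoy address out of every consumer. A sketch, where getUserServiceClient and the default URL are our own conventions rather than generated code:

// shared-client/src/index.ts
import { UserServicePromiseClient } from './user/v1/UserServiceClientPb';

// The address of the Envoy proxy (not the gRPC server itself).
const DEFAULT_ENVOY_URL = 'http://localhost:8080';

// Lazily constructed singleton so every micro-frontend shares one client.
let client: UserServicePromiseClient | undefined;

export function getUserServiceClient(baseUrl: string = DEFAULT_ENVOY_URL): UserServicePromiseClient {
  if (!client) {
    client = new UserServicePromiseClient(baseUrl);
  }
  return client;
}

// Re-export the generated message types for consumers.
export * from './user/v1/user_pb';

The Angular component in the next section instantiates the client directly for clarity; in a real codebase it would call getUserServiceClient() instead.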

4.5. Angular Remote Application: Configuration and Implementation

We need to modify the Angular app’s Webpack configuration to expose it as a consumable Remote.

webpack.config.js:

const { ModuleFederationPlugin } = require('webpack').container;
const deps = require('./package.json').dependencies;

module.exports = {
  output: {
    uniqueName: "angularUserProfile",
    publicPath: "auto"
  },
  optimization: {
    runtimeChunk: false
  },
  plugins: [
    new ModuleFederationPlugin({
      name: "angularUserProfile",
      filename: "remoteEntry.js",
      exposes: {
        // Expose the Angular module, not a single component
        './UserProfileModule': './src/app/user-profile/user-profile.module.ts',
      },
      shared: {
        "@angular/core": { singleton: true, strictVersion: true, requiredVersion: deps["@angular/core"] },
        "@angular/common": { singleton: true, strictVersion: true, requiredVersion: deps["@angular/common"] },
        "@angular/router": { singleton: true, strictVersion: true, requiredVersion: deps["@angular/router"] },
        "rxjs": { singleton: true, strictVersion: true, requiredVersion: deps["rxjs"] },
      }
    })
  ],
};
  • Note: Integrating this config with the Angular CLI requires @angular-builders/custom-webpack: set the project’s build target to the @angular-builders/custom-webpack:browser builder and point its customWebpackConfig.path option at this file.

user-profile.component.ts Implementation:

import { Component, OnInit } from '@angular/core';
import { UserServicePromiseClient } from 'shared-client/src/user/v1/UserServiceClientPb';
import { GetUserRequest, User } from 'shared-client/src/user/v1/user_pb';
import { Observable, from, throwError } from 'rxjs';
import { map, catchError } from 'rxjs/operators';

@Component({
  selector: 'app-user-profile',
  template: `
    <div *ngIf="user$ | async as user; else loading">
      <h3>Angular Remote Profile</h3>
      <p>ID: {{ user.getId() }}</p>
      <p>Name: {{ user.getName() }}</p>
      <p>Email: {{ user.getEmail() }}</p>
    </div>
    <ng-template #loading>
      <div *ngIf="error; else loadingSpinner">Error: {{ error }}</div>
      <ng-template #loadingSpinner><p>Loading Angular remote...</p></ng-template>
    </ng-template>
  `,
})
export class UserProfileComponent implements OnInit {
  private client: UserServicePromiseClient;
  user$!: Observable<User>;
  error: string | null = null;

  constructor() {
    // This address points to the Envoy proxy
    this.client = new UserServicePromiseClient('http://localhost:8080');
  }

  ngOnInit(): void {
    const request = new GetUserRequest();
    request.setId('1');

    // Convert the Promise to an RxJS Observable to align with the Angular ecosystem
    this.user$ = from(this.client.getUser(request)).pipe(
      map(response => response.getUser()),
      catchError(err => {
        this.error = err.message;
        return throwError(() => new Error(err.message));
      })
    );
  }
}
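
Because the host (Section 4.6) calls bootstrapModule on the exposed module, the module must declare a bootstrap component. A minimal sketch of user-profile.module.ts:

// src/app/user-profile/user-profile.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UserProfileComponent } from './user-profile.component';

@NgModule({
  declarations: [UserProfileComponent],
  imports: [BrowserModule],
  // Angular mounts this component onto the first <app-user-profile>
  // element it finds in the DOM, which the host renders itself.
  bootstrap: [UserProfileComponent],
})
export class UserProfileModule {}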

4.6. Next.js Host Application: Configuration and Implementation

Next.js needs to be configured as the Host to dynamically load the Angular Remote.

next.config.js:

const { ModuleFederationPlugin } = require('webpack').container;
const deps = require('./package.json').dependencies;

module.exports = {
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.plugins.push(
        new ModuleFederationPlugin({
          name: 'nextHost',
          remotes: {
            // Define the location of the remote
            angularUserProfile: 'angularUserProfile@http://localhost:4200/remoteEntry.js',
          },
          shared: {
            "react": { singleton: true, strictVersion: true, requiredVersion: deps.react },
            "react-dom": { singleton: true, strictVersion: true, requiredVersion: deps["react-dom"] },
          },
        })
      );
    }
    return config;
  },
};
  • A common pitfall: The Next.js Webpack config runs for both server-side and client-side builds. Module Federation should only be applied on the client (!isServer).

Dynamic Loading Component (components/AngularProfileLoader.js):

import React, { useEffect } from 'react';

// Lazy loading of this component itself is handled at the page level via next/dynamic
const loadAngularModule = async () => {
  // Load the exposed module from the remote
  const { UserProfileModule } = await import('angularUserProfile/UserProfileModule');

  // Dynamically import the Angular bootstrap API
  const { platformBrowserDynamic } = await import('@angular/platform-browser-dynamic');
  const { enableProdMode } = await import('@angular/core');

  // Avoid re-bootstrapping if a previous instance is still alive
  if (window.angularApp) {
    window.angularApp.destroy();
  }

  try {
    enableProdMode();
  } catch (e) {
    // enableProdMode throws if called more than once; safe to ignore on reload
  }

  // Bootstrap the Angular module; it mounts onto <app-user-profile> below
  const ngModuleRef = await platformBrowserDynamic().bootstrapModule(UserProfileModule);
  window.angularApp = ngModuleRef;
};

const AngularProfileLoader = () => {

  useEffect(() => {
    loadAngularModule().catch((err) => {
      console.error('Failed to load Angular remote:', err);
    });

    // Clean up on component unmount
    return () => {
      if (window.angularApp) {
        window.angularApp.destroy();
        window.angularApp = undefined;
      }
    };
  }, []);

  // Angular bootstraps into this element, which it locates by the
  // UserProfileComponent selector 'app-user-profile'
  return <div><app-user-profile /></div>;
};

export default AngularProfileLoader;

In a Next.js page, use next/dynamic to load this Loader component, ensuring it only renders on the client side.

import dynamic from 'next/dynamic';

const DynamicAngularProfile = dynamic(
  () => import('../components/AngularProfileLoader'),
  { ssr: false, loading: () => <p>Loading remote component...</p> }
);

export default function Home() {
  return (
    <div>
      <h1>Next.js Host Application</h1>
      <p>Below is a remote component served from an Angular application.</p>
      <hr />
      <DynamicAngularProfile />
    </div>
  );
}

5. Architectural Extensibility and Limitations

This architectural pattern offers powerful extensibility. Adding a new React or Vue micro-frontend is a matter of following a similar Module Federation setup. Likewise, adding new Go microservices on the backend only requires defining a new .proto file and implementing the service; the communication pattern remains unchanged.

However, this architecture is no silver bullet. Its implementation and maintenance present non-trivial challenges:

  1. Build and Deployment Complexity: Module Federation configuration is sensitive to Webpack versions and loaders. Debugging cross-framework configurations can be very time-consuming. CI/CD pipelines must be carefully orchestrated to coordinate the versions of independently deployed micro-frontends.
  2. Shared Dependency Governance: While dependency sharing reduces bundle size, it also introduces the risk of “version hell.” If a core shared library (like RxJS) requires a major version upgrade, all consuming applications may need to be updated simultaneously, which undermines the independence of micro-frontends. A strict versioning and testing strategy is essential.
  3. Runtime Coupling: The Host and Remote applications are tightly coupled at runtime. If the remoteEntry.js file fails to load, or a runtime error occurs within the Remote app, it can crash part or all of the Host application. Robust Error Boundaries and fallback strategies are necessary (see the sketch after this list).
  4. Local Development Experience: Running and debugging multiple frontend applications and backend services simultaneously places high demands on the development environment. Investment is needed to build unified scaffolding and dev servers to streamline this process.
  5. Cross-Framework Communication: Module Federation itself does not provide a cross-framework state management solution. Simple communication can be handled via Custom Events or props, but complex scenarios still require a separate, framework-agnostic state management library (e.g., Redux or MobX), which adds another layer of complexity to the overall architecture.
