The accumulation of technical debt often begins with a seemingly innocuous component. We were facing a classic scenario: a core business application, a Ruby on Rails monolith, with a user authentication system built on a custom, stateful token mechanism. As business traffic surged, this authentication module became the system’s primary performance bottleneck. Every API request required a validation round-trip with the backend’s session store, quickly exhausting database connections under high concurrency and causing request latency to skyrocket. Our new frontend project, built on a React and MobX stack, was designed for a fluid single-page application experience but was severely hampered by this antiquated authentication mechanism.
The problem before us was clear-cut: how could we resolve the authentication bottleneck and provide a modern, stateless, and standardized authentication solution (like JWT) for the new frontend, all without rewriting the core Ruby application?
At a Crossroads in Architectural Decisions
During our technical planning meeting, we explored two mainstream solutions.
Option A: Build a Dedicated Authentication Microservice
This was the most direct approach: extract the authentication logic from the Ruby monolith, rewrite it in a high-performance language like Go or Rust, and deploy it as an independent microservice.
Pros:
- Clear Separation of Concerns: The new service could be iterated upon and scaled independently.
- Modern Tech Stack: We could completely escape the performance limitations of Ruby.
- Centralized Authentication: It could provide unified authentication capabilities for all services across the company.
Cons:
- High Development Cost: Rewriting the authentication logic meant dealing with all the historical baggage, including multiple password encryption schemes and complex user permission models. The workload was immense.
- Complex Data Synchronization: The new service needed access to user data. This would require either a database split or the introduction of a data synchronization mechanism. Whether using dual-writes, CDC, or scheduled syncs, each option introduces new complexity and potential data inconsistency risks.
- Increased Operational Burden: Introducing a new service, a new database, and new deployment pipelines places greater demands on the operations team. In any real-world project, the long-term maintenance cost of any new component must be evaluated.
Option B: Offload Logic to the API Gateway Layer
The core idea of this approach is to leverage the extensibility of an API gateway to handle authentication logic upfront. The client interacts with the gateway, which then communicates with the old backend authentication system. The gateway’s role is to convert the old, stateful token into a new, stateless JWT before proxying the request to the upstream service.
Pros:
- Transparent to the Backend: The Ruby monolith requires zero code changes and continues to use its existing authentication method.
- Significant Performance Improvement: High-frequency token validation is offloaded to the high-performance gateway layer, protecting the fragile backend service.
- Rapid Implementation: Eliminates the need for complex data migration, resulting in a much shorter development cycle.
Cons:
- Gateway Becomes a Critical Bottleneck: If the gateway plugin’s performance is poor or it has defects, all traffic will be affected.
- Technology Choice is Crucial: The success of this solution hinges directly on the programming language and execution model of the gateway plugin.
After weighing the trade-offs, we chose Option B. It was less intrusive, more risk-averse, and allowed us to quickly solve the most pressing performance issues with minimal cost. This is a highly pragmatic engineering decision when dealing with legacy systems. The focus then shifted to selecting the API gateway and the technology for implementing the plugin. We ultimately decided on Apache APISIX and chose Rust to write an External Plugin.
- Why APISIX? It’s built on Nginx and LuaJIT, delivering exceptional performance. More importantly, it features a robust plugin ecosystem and a flexible plugin execution mechanism, particularly its support for the External Plugin Runner. This allows us to write plugin logic in other languages (like Java, Go, Python, and Rust) without being confined to Lua.
- Why Rust?
- Extreme Performance: Rust offers runtime performance comparable to C/C++ without the pauses associated with a Garbage Collector (GC). This is critical for a gateway that must handle massive request volumes with extreme sensitivity to latency.
- Memory Safety: Rust’s ownership and borrow checker eliminate memory safety issues like null pointers, dangling pointers, and data races at compile time. For a core network component running 24/7, stability is paramount.
- Powerful Concurrency: Rust’s async/await syntax and mature async runtimes (like Tokio) make writing highly concurrent network applications simple and efficient.
Core Implementation Overview
Our goal was to create an APISIX external plugin in Rust that could intercept specific API requests and execute the following logic:
- Login Flow: For login requests (/api/auth/login), the plugin would pass them through to the backend Ruby application. When the Ruby app successfully validates the credentials and returns its custom legacy_token, the plugin intercepts the response. It then calls an internal endpoint (or queries the database directly) to fetch the user’s detailed information and permissions, and generates a standard JWT to return to the frontend.
- Business Request Flow: For subsequent business requests carrying a JWT, the plugin performs signature verification directly at the gateway layer. Once validated, it injects user information from the JWT (like the user ID) into the request headers before forwarding the request to the upstream Ruby service. This way, the Ruby service no longer needs to handle token validation; it can simply trust the user information from the request headers.
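This dispatch rule can be sketched as a pure function. The names below (`Decision`, `route`) are illustrative only, not part of the actual plugin:

```rust
// Toy sketch of the routing decision described above: login requests pass
// through untouched, every other request must carry a Bearer JWT.
#[derive(Debug, PartialEq)]
enum Decision {
    PassThrough,
    VerifyJwt,
    Reject(&'static str),
}

fn route(path: &str, method: &str, auth_header: Option<&str>) -> Decision {
    // Login requests go straight through to the Ruby backend
    if path == "/api/auth/login" && method == "POST" {
        return Decision::PassThrough;
    }
    // Everything else is verified at the gateway layer
    match auth_header {
        Some(h) if h.starts_with("Bearer ") => Decision::VerifyJwt,
        _ => Decision::Reject("missing or malformed Authorization header"),
    }
}

fn main() {
    assert_eq!(route("/api/auth/login", "POST", None), Decision::PassThrough);
    assert_eq!(route("/api/orders", "GET", Some("Bearer a.b.c")), Decision::VerifyJwt);
    println!("ok");
}
```

Keeping the decision logic in a pure function like this makes it trivial to unit-test without spinning up the gateway.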
Here is the interaction flow diagram for this architecture:
sequenceDiagram
    participant Client as Client (MobX SPA)
    participant Gateway as APISIX Gateway
    participant RustPlugin as Rust Plugin Runner
    participant RubyAuth as Legacy Ruby Auth Service

    Client->>+Gateway: POST /api/auth/login (credentials)
    Gateway->>+RubyAuth: Forward /api/auth/login request
    RubyAuth-->>-Gateway: Response with `legacy_token`
    Note over Gateway,RustPlugin: Gateway intercepts response
    Gateway->>+RustPlugin: Process response with `legacy_token`
    RustPlugin->>+RubyAuth: GET /internal/user_info (using `legacy_token`)
    RubyAuth-->>-RustPlugin: User profile & permissions
    RustPlugin-->>-Gateway: Generate new JWT
    Gateway-->>-Client: Respond with standard JWT

    Client->>+Gateway: GET /api/orders (Authorization: Bearer JWT)
    Note over Gateway,RustPlugin: Gateway intercepts request
    Gateway->>+RustPlugin: Verify JWT signature & claims
    RustPlugin-->>-Gateway: Verification OK, inject User-ID header
    Gateway->>+RubyAuth: Forward request with `X-User-ID` header
    RubyAuth-->>-Gateway: Response data
    Gateway-->>-Client: Forward response data
APISIX and Rust Plugin Configuration
First, we need to enable the external plugin runner in APISIX’s config.yaml and specify the communication method. In a production environment, using a Unix Domain Socket is more efficient and secure than a TCP port.
config.yaml
apisix:
  # ... other configurations
  # enable_real_ip: true
ext-plugin:
  cmd:
    # The startup command for the plugin runner; APISIX will manage this process
    - /path/to/your/rust_plugin_runner
  path: /tmp/apisix-rust.sock # Use a Unix Domain Socket
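For illustration, here is a minimal std-only sketch of a process binding the kind of Unix domain socket the runner listens on. The path `/tmp/apisix-rust-demo.sock` is made up for the example; in a real deployment, the SDK handles the socket and uses the path from config.yaml:

```rust
use std::os::unix::net::UnixListener;

// Bind the runner's listening socket, clearing any stale socket file first.
fn bind_runner_socket(path: &str) -> std::io::Result<UnixListener> {
    // A previous crashed run may have left the socket file behind
    let _ = std::fs::remove_file(path);
    UnixListener::bind(path)
}

fn main() -> std::io::Result<()> {
    // Illustrative path for the demo only
    let path = "/tmp/apisix-rust-demo.sock";
    let _listener = bind_runner_socket(path)?;
    println!("runner listening on {}", path);
    std::fs::remove_file(path)?; // clean up after the demo
    Ok(())
}
```

A Unix domain socket avoids the TCP stack entirely and can be protected with filesystem permissions, which is why it is preferred over a loopback port here.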
Next, we create an APISIX route to bind this external plugin to the APIs we want to protect.
APISIX Route Configuration (via Admin API)
curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "uri": "/api/*",
  "plugins": {
    "ext-plugin-pre-req": {
      "conf": [
        { "name": "rust-auth-plugin", "value": "{\"jwt_secret\":\"your-super-secret-key\"}" }
      ]
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "ruby-app:3000": 1
    }
  }
}'
Here, we define a plugin named rust-auth-plugin and configure it to run in the ext-plugin-pre-req phase. The plugin’s configuration (like the JWT secret) is passed as a JSON string via the value field.
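To make the shape of that `value` field concrete, here is a std-only sketch that pulls `jwt_secret` out of the raw JSON string. A real plugin deserializes the whole string with serde; `extract_jwt_secret` is purely illustrative:

```rust
// The `value` field reaches the plugin runner as a raw JSON string.
// This sketch extracts "jwt_secret" with plain string handling just to
// show the payload's shape (production code would use serde_json).
fn extract_jwt_secret(conf_value: &str) -> Option<String> {
    let key = "\"jwt_secret\":\"";
    let start = conf_value.find(key)? + key.len();
    let len = conf_value[start..].find('"')?;
    Some(conf_value[start..start + len].to_string())
}

fn main() {
    let value = r#"{"jwt_secret":"your-super-secret-key"}"#;
    assert_eq!(extract_jwt_secret(value).as_deref(), Some("your-super-secret-key"));
    println!("secret extracted");
}
```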
Rust Plugin Core Code Implementation
Now for the most critical part: coding the Rust plugin. We’ll use the apisix-rust-plugin-sdk crate to simplify development.
Cargo.toml
[package]
name = "apisix-rust-auth-plugin"
version = "0.1.0"
edition = "2021"
[dependencies]
apisix-rust-plugin-sdk = "0.7"
async-trait = "0.1"
tokio = { version = "1", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
jsonwebtoken = "8"
reqwest = { version = "0.11", features = ["json"] }
chrono = "0.4"
tracing = "0.1"
tracing-subscriber = "0.3"
src/main.rs
use apisix_rust_plugin_sdk::{
    filter::{Filter, FilterAction, FilterContext, FilterStatus},
    http::Method,
    log::info,
    Plugin,
};
use chrono::Utc;
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use reqwest::Client;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::Arc;
// Define the plugin configuration struct
#[derive(Deserialize)]
struct AuthPluginConfig {
    jwt_secret: String,
}

// Define the JWT Claims
#[derive(Debug, Serialize, Deserialize)]
struct Claims {
    sub: String,        // Subject (user ID)
    exp: usize,         // Expiration Time
    roles: Vec<String>,
}

// The main plugin struct
struct AuthPlugin {
    client: Arc<Client>,
}

impl Plugin for AuthPlugin {
    type Conf = AuthPluginConfig;
    type Ctx = ();

    fn new(_conf: Self::Conf) -> Self {
        // In a production environment, the HTTP client should be configured more carefully,
        // e.g., setting connection pool size, timeouts, etc.
        let client = Arc::new(Client::new());
        Self { client }
    }

    fn name() -> &'static str {
        "rust-auth-plugin"
    }
}
// Implement the Filter trait to handle requests
#[async_trait::async_trait]
impl Filter for AuthPlugin {
    async fn on_http_request(&mut self, conf: &AuthPluginConfig, ctx: &mut FilterContext) -> FilterAction {
        let req = ctx.req();
        let path = req.path();

        // For login requests, pass them through to the backend Ruby application
        if path == "/api/auth/login" && req.method() == Method::POST {
            info!("Login request detected, passing through.");
            return FilterAction::Continue;
        }

        // For non-login requests, validate the JWT
        let auth_header = match req.header("Authorization") {
            Some(h) => h,
            None => {
                // Production-grade error handling should return structured JSON
                ctx.resp()
                    .set_status_code(401)
                    .set_body("Missing Authorization Header".as_bytes().to_vec());
                return FilterAction::Stop;
            }
        };

        if !auth_header.starts_with("Bearer ") {
            ctx.resp()
                .set_status_code(401)
                .set_body("Invalid token format".as_bytes().to_vec());
            return FilterAction::Stop;
        }

        let token = &auth_header[7..];
        let decoding_key = DecodingKey::from_secret(conf.jwt_secret.as_ref());
        let validation = Validation::default();

        match decode::<Claims>(token, &decoding_key, &validation) {
            Ok(token_data) => {
                // Verification successful, inject user info into request headers
                ctx.req_mut()
                    .headers_mut()
                    .insert("X-User-ID", token_data.claims.sub.as_str());
                ctx.req_mut()
                    .headers_mut()
                    .insert("X-User-Roles", token_data.claims.roles.join(",").as_str());
                FilterAction::Continue
            }
            Err(e) => {
                info!("JWT validation failed: {}", e);
                ctx.resp()
                    .set_status_code(401)
                    .set_body("Invalid or expired token".as_bytes().to_vec());
                FilterAction::Stop
            }
        }
    }
    // Handle the token transformation in the response phase after a successful login
    async fn on_http_response(&mut self, conf: &AuthPluginConfig, ctx: &mut FilterContext) -> FilterStatus {
        let req = ctx.req();
        let path = req.path();
        let resp_status = ctx.resp().status_code();

        // Only process successful responses for login requests
        if path == "/api/auth/login" && resp_status >= 200 && resp_status < 300 {
            info!("Intercepting successful login response.");
            let body = ctx.resp().body();
            let legacy_data: Result<HashMap<String, String>, _> = serde_json::from_slice(&body);

            if let Ok(data) = legacy_data {
                if let Some(legacy_token) = data.get("legacy_token") {
                    // Error handling here is crucial.
                    // If fetching user info fails, we must return an error to the client, not a success.
                    match self.fetch_user_info(legacy_token).await {
                        Ok((user_id, roles)) => {
                            let expiration = Utc::now()
                                .checked_add_signed(chrono::Duration::hours(24))
                                .expect("valid timestamp")
                                .timestamp();
                            let claims = Claims {
                                sub: user_id,
                                exp: expiration as usize,
                                roles,
                            };
                            let header = Header::default();
                            let encoding_key = EncodingKey::from_secret(conf.jwt_secret.as_ref());
                            match encode(&header, &claims, &encoding_key) {
                                Ok(jwt) => {
                                    let new_body = serde_json::json!({ "jwt_token": jwt });
                                    ctx.resp().set_status_code(200);
                                    ctx.resp().set_body(new_body.to_string().into_bytes());
                                }
                                Err(_) => {
                                    // JWT encoding failure is a server-side internal error
                                    ctx.resp().set_status_code(500);
                                    ctx.resp().set_body("Failed to generate token".as_bytes().to_vec());
                                }
                            }
                        }
                        Err(e) => {
                            info!("Failed to fetch user info: {}", e);
                            ctx.resp().set_status_code(503); // Service Unavailable
                            ctx.resp().set_body("Auth backend service is unavailable".as_bytes().to_vec());
                        }
                    }
                }
            }
        }
        FilterStatus::Done
    }
}

// A helper function to fetch user info from the Ruby backend.
// In a real project, this might be an internal RPC call or a request to a specific API endpoint.
impl AuthPlugin {
    async fn fetch_user_info(&self, legacy_token: &str) -> Result<(String, Vec<String>), reqwest::Error> {
        info!("Fetching user info with legacy token");
        let user_info_url = "http://ruby-app:3000/internal/user_info";

        // The HTTP client is taken from the plugin instance to reuse the connection pool.
        let response = self.client
            .get(user_info_url)
            .header("X-Legacy-Token", legacy_token)
            .send()
            .await?;

        if response.status().is_success() {
            let user_data: HashMap<String, serde_json::Value> = response.json().await?;
            let user_id = user_data.get("id").and_then(|v| v.as_str()).unwrap_or_default().to_string();
            let roles = user_data.get("roles")
                .and_then(|v| v.as_array())
                .map(|arr| arr.iter().map(|r| r.as_str().unwrap_or_default().to_string()).collect())
                .unwrap_or_default();
            Ok((user_id, roles))
        } else {
            // Convert the HTTP error into a processable error type.
            Err(response.error_for_status().unwrap_err())
        }
    }
}

// Register the plugin
#[no_mangle]
pub extern "C" fn apisix_plugin_init() {
    AuthPlugin::register();
}
This code demonstrates the plugin’s core logic, including its attachment points in the request/response phases, JWT generation and validation, asynchronous HTTP communication with the backend, and critical error handling. A common mistake is to handle the fetch_user_info failure improperly, leading to an inconsistent state being returned to the client. Our implementation ensures that if the backend dependency is unavailable, a 503 Service Unavailable is returned, which is crucial for rapid problem diagnosis.
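As an aside, the Authorization header parsing above can be factored into a pure helper that is easy to unit-test outside the gateway runtime. This sketch uses `strip_prefix` instead of the manual `&auth_header[7..]` slice:

```rust
// Extract the token from a "Bearer <token>" Authorization header.
// Returns None for any other scheme or an empty token.
fn bearer_token(auth_header: &str) -> Option<&str> {
    // `strip_prefix` returns None when the prefix is absent, replacing
    // the starts_with check plus manual slicing in one step
    auth_header.strip_prefix("Bearer ").filter(|t| !t.is_empty())
}

fn main() {
    assert_eq!(bearer_token("Bearer abc.def.ghi"), Some("abc.def.ghi"));
    assert_eq!(bearer_token("Basic abc"), None);
    println!("ok");
}
```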
Frontend State Management and Toolchain
On the frontend, we use MobX to manage authentication state. With a standard JWT, state management becomes very clean.
// stores/AuthStore.js
import { makeAutoObservable, runInAction } from 'mobx';
import axios from 'axios';

class AuthStore {
  jwt = localStorage.getItem('jwt_token') || null;
  userInfo = null;
  isAuthenticated = false;
  error = null;

  constructor() {
    makeAutoObservable(this);
    this.checkAuth();
  }

  async login(credentials) {
    try {
      const response = await axios.post('/api/auth/login', credentials);
      const { jwt_token } = response.data;
      runInAction(() => {
        this.jwt = jwt_token;
        this.isAuthenticated = true;
        this.error = null;
        localStorage.setItem('jwt_token', jwt_token);
        // Decode the JWT to get user info, or make another request
        // const decoded = jwt_decode(jwt_token);
        // this.userInfo = { id: decoded.sub, roles: decoded.roles };
      });
    } catch (err) {
      runInAction(() => {
        this.error = 'Login failed';
        this.isAuthenticated = false;
      });
    }
  }

  logout() {
    this.jwt = null;
    this.userInfo = null;
    this.isAuthenticated = false;
    localStorage.removeItem('jwt_token');
  }

  checkAuth() {
    if (this.jwt) {
      // In a production app, you should also validate if the JWT has expired
      this.isAuthenticated = true;
    }
  }
}

export const authStore = new AuthStore();
This MobX store cleanly manages the user’s authentication state and JWT. Additionally, to ensure code quality and consistency in our large frontend project, we adopted Rome as our unified toolchain. It integrates a linter, formatter, compiler, and more. Configuring rules in rome.json enforces a consistent coding style and best practices across the team, reducing unnecessary code review overhead in a collaborative environment.
Architectural Limitations and Future Evolution
This solution, based on an APISIX and Rust plugin, effectively solved our immediate performance problems and enabled a non-intrusive modernization of our backend services. However, it is not a silver bullet and has its own boundaries and limitations.
First, the Rust external plugin process itself becomes a new, critical component to maintain. Although it’s extremely performant, it requires its own monitoring, logging, and alerting systems to ensure its stability. While APISIX manages its lifecycle, in extreme cases, a crash or unresponsive plugin process could still impact service.
Second, the current authentication flow still depends on the backend Ruby service’s /internal/user_info endpoint, which means the logic isn’t fully decoupled. If this endpoint fails, all new login requests will fail. We have merely transformed a performance bottleneck into a high-availability dependency.
The future evolution path is clear: gradually migrate user data and authentication logic out of the Ruby monolith. We can start by using CDC (Change Data Capture) to synchronize user data in real-time to a new, independent database. Once the data synchronization is stable, the Rust plugin can query the new database directly, completely removing the runtime dependency on the old Ruby application. Finally, once all related business logic has been migrated, the old authentication module can be safely decommissioned. Our current architecture paves the way for this ultimate goal, serving as a pragmatic and effective transitional state.