
Rate-Limit Audit + `@nestjs/throttler` Implementation Plan


Date: 2026-04-21
Status: Analysis done — implementation DEFERRED post-MVP0
Owner: Claude (BE)
Ecosystem rule: this plan picks @nestjs/throttler per our ecosystem-plugin-preference rule — NestJS project → reach for @nestjs/* first.


MVP 0 is pre-revenue, closed alpha. Attack surface:

  • Bot-registered Clerk accounts → Clerk handles bot detection (hCaptcha built-in)
  • Scraping professional listings → data is public anyway (SEO goal post-MVP0)
  • SMS pumping via /send-otp → Clerk handles SMS throttle server-side
  • Brute-force auth → Clerk handles (verifyToken rate-limits)

Actual MVP0 risk: none high enough to block launch. Rate limiting becomes load-bearing when:

  • Public API endpoints (unauthenticated search, category listing) exposed to web crawlers
  • Costs scale on request volume (Mapbox geocode, Novu notification triggers)
  • DDoS surface grows with traffic

Plan this now, land post-demo when we have time for the BullMQ + Redis integration testing it deserves.


Surface-level map — endpoints that WILL need limits

| Endpoint | Current limit | Needed limit | Reason |
| --- | --- | --- | --- |
| POST /auth/login (proxied Clerk) | Clerk default | OK — Clerk handles | Brute-force, handled upstream |
| POST /auth/signup | Clerk default | OK — Clerk handles | Bot signup, handled upstream |
| POST /bookings | none | 10/min per user | Spam bookings, trolling pros |
| POST /sos/request | none | 3/hour per user | False emergency abuse |
| POST /messages | none | 60/min per user | Chat spam, harassment |
| POST /reviews | none | 5/day per user | Review bombing |
| POST /credentials/me/:id/upload-url | 10/day | ✅ already via Redis | Already implemented — MIGRATE to throttler |
| POST /webhooks/clerk, /webhooks/stripe | none | none — webhooks are IP-filtered | svix + Stripe sign, source IP verified |
| GET /search, GET /professionals | Redis cache 30s | Cache already buffers; add 60/min per IP | Scraping |
| POST /ai/parse-job (LangGraph) | none | 20/hour per user | LLM token cost |
| GET /geocode/* (Mapbox proxy) | none | 30/min per user | Mapbox req cost (100K/mo free) |
| POST /media/presign | none | 20/hour per user | Upload flood → R2 cost |

Total new policies: ~10 endpoints, tiered by sensitivity.

| Class | Rule | Endpoints |
| --- | --- | --- |
| authStrict | 5/min per user | login adjacent, password-reset |
| userWrite | 10-20/min per user | booking, review, credential actions |
| userRead | 60/min per IP | search, listings |
| costly | 20/hour per user | AI parse, upload presign, geocode |

Implementation plan — @nestjs/throttler v7+

```sh
pnpm --filter @ideony/api add @nestjs/throttler
```
apps/api/src/app.module.ts

```ts
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler';

@Module({
  imports: [
    ThrottlerModule.forRoot([
      { name: 'authStrict', ttl: 60_000, limit: 5 },
      { name: 'userWrite', ttl: 60_000, limit: 20 },
      { name: 'userRead', ttl: 60_000, limit: 60 },
      { name: 'costly', ttl: 3_600_000, limit: 20 },
    ]),
    // ...
  ],
  providers: [
    { provide: APP_GUARD, useClass: ThrottlerGuard },
    // ...
  ],
})
export class AppModule {}
```

Default in-memory storage won’t work behind a multi-replica deploy — each replica would count hits independently. Use @nest-lab/throttler-storage-redis (the Redis-backed official companion).

apps/api/src/common/throttler.config.ts

```ts
import { ConfigModule, ConfigService } from '@nestjs/config';
import { ThrottlerModule } from '@nestjs/throttler';
import { ThrottlerStorageRedisService } from '@nest-lab/throttler-storage-redis';

ThrottlerModule.forRootAsync({
  imports: [ConfigModule],
  inject: [ConfigService],
  useFactory: (config: ConfigService) => ({
    throttlers: [ /* classes above */ ],
    storage: new ThrottlerStorageRedisService(config.get('REDIS_URL')),
  }),
});
```

Step 4 — tracker override (user-ID based, fall back to IP)


Default tracker = IP. We want user-ID tracker post-auth, IP pre-auth.

apps/api/src/common/guards/app-throttler.guard.ts

```ts
import { Injectable } from '@nestjs/common';
import { ThrottlerGuard } from '@nestjs/throttler';
import type { FastifyRequest } from 'fastify';

@Injectable()
export class AppThrottlerGuard extends ThrottlerGuard {
  protected async getTracker(req: FastifyRequest): Promise<string> {
    const userId = (req as any).user?.id;
    return userId ?? req.ip; // user when authed, IP for anon
  }
}
```

Wire via APP_GUARD instead of the default ThrottlerGuard.
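The wiring itself is a one-line provider swap in app.module.ts. A minimal sketch (import path assumed):

```ts
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { AppThrottlerGuard } from './common/guards/app-throttler.guard';

@Module({
  providers: [
    // AppThrottlerGuard replaces the stock ThrottlerGuard registration,
    // so the user-ID tracker applies to every throttled route globally.
    { provide: APP_GUARD, useClass: AppThrottlerGuard },
  ],
})
export class AppModule {}
```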

```ts
import { Controller, Post } from '@nestjs/common';
import { SkipThrottle, Throttle } from '@nestjs/throttler';

@Controller('bookings')
export class BookingsController {
  @Post()
  @Throttle({ userWrite: { limit: 10, ttl: 60_000 } })
  async create() { /* ... */ }
}

@Controller('sos')
export class SosController {
  @Post('request')
  @Throttle({ userWrite: { limit: 3, ttl: 3_600_000 } })
  async request() { /* ... */ }
}

@Controller('webhooks/clerk')
@SkipThrottle()
export class ClerkWebhookController { /* ... */ }

@Controller('health')
@SkipThrottle()
export class HealthController { /* ... */ }
```

The default ThrottlerException returns a 429 with a plain message. Wrap it to:

  • Return a Retry-After header (seconds until reset, per the HTTP spec)
  • Translate the message via nestjs-i18n: t('errors.rate_limited', 'Too many requests. Try again later.')
  • Log to Sentry with tag rate_limit_hit
apps/api/src/common/guards/app-throttler.guard.ts

```ts
// Inside AppThrottlerGuard; ExecutionContext from '@nestjs/common',
// ThrottlerException and ThrottlerLimitDetail from '@nestjs/throttler'.
protected async throwThrottlingException(
  ctx: ExecutionContext,
  throttlerLimitDetail: ThrottlerLimitDetail,
): Promise<void> {
  const res = ctx.switchToHttp().getResponse();
  // Retry-After is specified in seconds; timeToExpire is tracked in ms here
  res.header('Retry-After', Math.ceil(throttlerLimitDetail.timeToExpire / 1000));
  // t() comes from nestjs-i18n (e.g. I18nContext.current()?.t)
  throw new ThrottlerException(t('errors.rate_limited'));
}
```
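The ms-to-seconds conversion is easy to get backwards, so it may be worth pulling into a tiny pure helper that can be unit-tested on its own. A sketch (the helper name is hypothetical):

```typescript
// Hypothetical helper: convert the throttler's remaining TTL (assumed to be
// in milliseconds) into the integer seconds Retry-After expects.
// Clamps to a minimum of 1 so clients never see Retry-After: 0.
export function retryAfterSeconds(timeToExpireMs: number): number {
  return Math.max(1, Math.ceil(timeToExpireMs / 1000));
}

console.log(retryAfterSeconds(1500)); // → 2
console.log(retryAfterSeconds(250)); // → 1 (clamped)
```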

Step 8 — migrate existing credential-upload limiter


CredentialsService currently hand-rolls a Redis INCR + EXPIRE for the 10/day limit. Replace it with a @Throttle({ costly: { limit: 10, ttl: 86_400_000 } }) decorator on the endpoint and delete the manual implementation in the service — the throttler guard runs before the service, so this is pure cleanup.
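After migration the endpoint would look roughly like this — controller and handler names are assumptions, not the real ones:

```ts
import { Controller, Param, Post } from '@nestjs/common';
import { Throttle } from '@nestjs/throttler';

@Controller('credentials/me')
export class CredentialsController {
  @Post(':id/upload-url')
  // 10/day per tracker, replacing the manual Redis INCR + EXPIRE in the service
  @Throttle({ costly: { limit: 10, ttl: 86_400_000 } })
  async createUploadUrl(@Param('id') id: string) { /* ... */ }
}
```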

| Test | Layer | Location |
| --- | --- | --- |
| AppThrottlerGuard tracker logic | Unit | apps/api/src/common/guards/app-throttler.guard.spec.ts |
| 429 response shape + Retry-After header | API integ | apps/api/test/integration/rate-limit.spec.ts |
| Per-endpoint limits (booking, SOS, upload) | API integ | Same file |
| Redis storage key isolation (tenant/user) | API integ | Same file |
| Skip list (webhooks, health) | API integ | Same file |

Plus E2E coverage — see docs/specs/2026-04-21-e2e-strategy.md §5 rate-limit row (GAP → closes with this work).

Add Sentry breadcrumb on 429:

```ts
// Inside throwThrottlingException, where req, tracker, throttlerClass,
// limit and ttl are all in scope:
Sentry.addBreadcrumb({
  category: 'rate_limit',
  message: `429 ${req.method} ${req.url}`,
  data: { tracker, class: throttlerClass, limit, ttl },
});
```

Dokploy Grafana board — add rate_limit_hit counter metric via @willsoto/nestjs-prometheus once monitoring stack lands.
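A sketch of what that counter could look like with @willsoto/nestjs-prometheus — metric and label names are assumptions to be settled when the monitoring stack lands:

```ts
import { InjectMetric, makeCounterProvider } from '@willsoto/nestjs-prometheus';
import { Counter } from 'prom-client';

// Provider, registered alongside the PrometheusModule:
export const rateLimitHitCounter = makeCounterProvider({
  name: 'rate_limit_hit',
  help: 'Requests rejected by the throttler guard',
  labelNames: ['route', 'throttler_class'],
});

// In the guard, injected via the constructor:
//   constructor(@InjectMetric('rate_limit_hit') private hits: Counter<string>) {}
// and incremented where the 429 is thrown:
//   this.hits.inc({ route: req.url, throttler_class: throttlerClass });
```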


~0.5 dev day — mostly decorators + tests. Redis storage + config ~2h, test matrix ~2h.

  • Post-cofounder-demo (MVP0 + 1-2 weeks)
  • After @nestjs/throttler v8 check on Context7 (lib evolves fast)
  • Land in same sprint as webhook + i18n E2E closure (M3 of E2E spec) — all three are API-integ test additions
  • ADR: docs/decisions/00XX-nestjs-throttler-rate-limit.md documenting tiering choices