
🔄 Real-Time Cache Refresh Using Azure Queue (Without Redis, Service Bus, or Pub/Sub)

🚀 A Lightweight Cache Refresh Mechanism with Azure Queue (No Redis, No Service Bus!)
💡 The Problem

In our application, the data lives in the database and is updated only infrequently.
To improve performance, we wanted this data cached in memory across multiple app instances.

The challenge:

Data changes occasionally (Insert, Update, Delete in Azure DB).

Whenever data changes, all app instances should refresh their cache.

We wanted to avoid Redis, Pub/Sub, or Service Bus and stick with Azure Queue Storage.

⚙️ The Solution

Here’s the approach I designed:

Cache Layer: Store the DB data in an in-memory cache (one per app instance).

Trigger on Change: When a DB operation (insert/update/delete) occurs, send a message to an Azure Queue with a TTL (Time-to-Live).

Background Listener: Each instance runs a lightweight listener (sketched a little further below) that:

Periodically checks the approximate message count in the queue.

If the count is > 0 → refreshes the local cache.

Message Cleanup: Since ApproximateMessagesCount is not updated in real time, each sender also schedules a DeleteMessageAsync call after a delay, ensuring the message is eventually cleaned up (see the sender sketch right after this list).
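
Here is a minimal sender-side sketch, assuming the Azure.Storage.Queues v12 SDK; the queue name cache-refresh-signals, the 5-minute TTL, and the 90-second delete delay are illustrative values, not part of any fixed design:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

public class CacheRefreshSignalSender
{
    private readonly QueueClient _queue;

    public CacheRefreshSignalSender(string connectionString)
    {
        // "cache-refresh-signals" is a made-up name; any dedicated queue works.
        _queue = new QueueClient(connectionString, "cache-refresh-signals");
        _queue.CreateIfNotExists();
    }

    // Call this right after the insert/update/delete has been committed.
    public async Task SignalChangeAsync()
    {
        // The body is irrelevant: the queue is a broadcast signal, not a data carrier.
        // The TTL guarantees the signal disappears even if the delayed delete never runs.
        var receipt = await _queue.SendMessageAsync(
            messageText: "refresh",
            timeToLive: TimeSpan.FromMinutes(5));

        // Delayed cleanup: give every instance time to observe the non-zero
        // ApproximateMessagesCount, then remove the message explicitly.
        _ = Task.Run(async () =>
        {
            await Task.Delay(TimeSpan.FromSeconds(90));
            await _queue.DeleteMessageAsync(receipt.Value.MessageId, receipt.Value.PopReceipt);
        });
    }
}
```

Note that the delete reuses the PopReceipt returned by SendMessageAsync, so the sender can remove its own message without ever dequeuing it.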

This design ensures:

All instances refresh cache consistently.

The queue acts as a broadcast signal, not a transport layer.

No dependency on Redis or Service Bus.
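
On the consuming side, each instance can run the listener as a .NET BackgroundService. This is a rough sketch; the 30-second poll interval, the reference-data cache key, and the LoadFromDatabaseAsync helper are assumptions for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Hosting;

public class CacheRefreshListener : BackgroundService
{
    private readonly QueueClient _queue;
    private readonly IMemoryCache _cache;

    public CacheRefreshListener(QueueClient queue, IMemoryCache cache)
    {
        _queue = queue;
        _cache = cache;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // ApproximateMessagesCount is eventually consistent, which is good enough:
            // we only need to know that "something changed recently".
            var props = await _queue.GetPropertiesAsync(stoppingToken);

            if (props.Value.ApproximateMessagesCount > 0)
            {
                // Hypothetical loader: re-read the slowly changing data from the DB.
                var data = await LoadFromDatabaseAsync(stoppingToken);
                _cache.Set("reference-data", data);
            }

            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }

    private Task<object> LoadFromDatabaseAsync(CancellationToken ct)
        => Task.FromResult<object>(new object()); // placeholder for the real DB query
}
```

Because the listener never dequeues the message, the count only drops back to zero once the sender's delayed delete (or the TTL) removes it, so an instance may refresh more than once per change; that is harmless as long as the cache load is idempotent.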

🔄 Flow Diagram
```mermaid
flowchart TD
    A["DB Change: Insert/Update/Delete"] --> B["Send message to Azure Queue (with TTL)"]
    B --> C["Background Listener reads Approximate Message Count"]
    C -->|"Count > 0"| D["Refresh Local Cache"]
    D --> E["Mark message for delayed DeleteMessageAsync"]
    E --> F["Cache Synchronized Across Instances"]
```

📌 Key Points

Uses Azure Storage Queue only (no Service Bus, no Redis).

ApproximateMessagesCount drives the decision, but cleanup ensures correctness.

Messages act as cache refresh signals, not data carriers.

Works well in multi-instance deployments.
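
If you are on .NET, the per-instance wiring can be as small as registering the listener as a hosted service. A minimal sketch, assuming ASP.NET Core minimal hosting; the configuration key and queue name are assumptions:

```csharp
using Azure.Storage.Queues;

// Program.cs: illustrative startup wiring for one app instance.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMemoryCache();
builder.Services.AddSingleton(new QueueClient(
    builder.Configuration["Storage:ConnectionString"], // assumed config key
    "cache-refresh-signals"));                         // assumed queue name
builder.Services.AddHostedService<CacheRefreshListener>();

var app = builder.Build();
app.Run();
```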

✅ Benefits

Lightweight – no extra infra needed.

Cost-effective – Azure Queue is cheap.

Resilient – each instance eventually refreshes even if messages are delayed.

🔮 Future Improvements

Use custom message metadata (e.g., message type, source host).

Replace ApproximateMessagesCount polling with event-based push if scale increases.

Add a dead-letter queue for failed refresh attempts.
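
As a sketch of the first improvement, the signal could carry a small JSON payload instead of a throwaway string; the RefreshSignal shape below is purely illustrative and not part of the current design:

```csharp
using System;
using System.Text.Json;

// Sender side: serialize a small payload instead of a plain "refresh" string.
var signal = new RefreshSignal(
    MessageType: "ReferenceDataChanged",
    SourceHost: Environment.MachineName,
    ChangedAtUtc: DateTimeOffset.UtcNow);

string body = JsonSerializer.Serialize(signal);
// await queue.SendMessageAsync(body, timeToLive: TimeSpan.FromMinutes(5));

// Illustrative payload shape for a richer refresh signal.
public record RefreshSignal(string MessageType, string SourceHost, DateTimeOffset ChangedAtUtc);
```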

✍️ Final Thoughts

This approach is perfect for scenarios where:

Data changes are infrequent.

You want shared cache consistency across multiple instances.

You don’t want to introduce Redis, Pub/Sub, or costly Service Bus.

Additional Use Case: Configuration Management

This approach isn’t limited to cache refresh.
Imagine you have an application deployed across multiple environments or instances where only configuration values change from time to time.

Traditionally:

You’d need to redeploy apps or push configs manually.

With this pattern:

Store config values/keys in the database.

On config update → send a message to the queue.

All instances refresh their in-memory config cache automatically.

👉 Result: No redeployments needed for simple config changes.
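
A rough sketch of that variant, reusing the same queue listener; the key/value config shape and the LoadConfigFromDatabaseAsync helper are assumptions:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class ConfigRefresher
{
    private readonly IMemoryCache _cache;

    public ConfigRefresher(IMemoryCache cache) => _cache = cache;

    // Called by the queue listener whenever a refresh signal is observed.
    public async Task RefreshAsync(CancellationToken ct)
    {
        // Hypothetical DB read: key/value config rows stored in a table.
        IReadOnlyDictionary<string, string> config = await LoadConfigFromDatabaseAsync(ct);
        _cache.Set("app-config", config);
    }

    private Task<IReadOnlyDictionary<string, string>> LoadConfigFromDatabaseAsync(CancellationToken ct)
        => Task.FromResult<IReadOnlyDictionary<string, string>>(
            new Dictionary<string, string>()); // placeholder for the real query
}
```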

👉 If you liked this idea, follow me here on Dev.to and let’s discuss how you handle cache refresh in your systems!
