How to Set Up Error Log Alerts for Your Application

Know the moment something breaks in your application. Get instant error alerts on Slack, Telegram, and Email with context that helps you fix issues faster.

Why You Need Real-Time Error Alerts

Every production application has bugs. The question is not whether errors will happen, but how quickly you find out about them. Traditional logging approaches require someone to actively check log files or a dashboard. By the time an error is spotted in a log aggregation tool, users may have already experienced the issue, filed support tickets, or churned entirely. Real-time error alerts bridge this gap by pushing critical errors directly to your team the moment they occur.

Services like Sentry and Bugsnag do this well, but they cost between $26 and $80+ per month for small teams. With One-Ping, you can build a lightweight error alerting system that sends notifications to Slack, Telegram, or email for a fraction of the cost. It does not replace a full error tracking platform, but for many teams it provides exactly the right level of awareness.

What you will build: An error alerting system integrated into your application's error handler that sends rich, contextual alerts to Slack and Telegram when errors occur, with rate limiting to prevent alert storms during cascading failures.

Step-by-Step Setup

Define Error Severity Levels

Not all errors deserve the same level of attention. Define severity levels that map to different notification channels. Critical errors (application crashes, database connection failures, payment processing errors) should alert everyone immediately via Slack + Telegram + SMS. Warning errors (API timeouts, validation failures, rate limit hits) should go to Slack only. Info-level errors (deprecation warnings, slow queries) should be logged but not alerted in real-time, instead aggregated into a daily digest.
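One way to encode this mapping is a simple lookup table in your application config. This is an illustrative sketch, not part of any One-Ping API; the level names and channel lists are assumptions you should adapt to your own team:

```python
# Illustrative severity-to-channel routing. The level names and channel
# lists are assumptions -- adjust them to match your own escalation policy.
SEVERITY_CHANNELS = {
    "critical": ["slack", "telegram", "sms"],  # crashes, DB/payment failures
    "warning": ["slack"],                      # timeouts, validation errors
    "info": [],                                # logged only; sent as a daily digest
}

def channels_for(severity: str) -> list[str]:
    """Return the notification channels for a severity level.

    Unknown severities fall back to Slack so nothing is silently dropped.
    """
    return SEVERITY_CHANNELS.get(severity, ["slack"])
```

Your error handler can then pass `channels_for(severity)` as the `channels` field of the notification payload.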

Add One-Ping to Your Error Handler

Every web framework has a global error handler or middleware for catching unhandled exceptions. This is where you add the One-Ping integration. The code below shows how to do this in Express.js (Node.js) and Django (Python). The pattern is the same regardless of your framework: catch the error, extract context (URL, user, stack trace), and send a One-Ping notification.

Configure Alert Channels

Set up your preferred channels in the One-Ping dashboard. For error alerts, the recommended combination is: Slack (a dedicated #errors or #alerts channel for team visibility), Telegram (for the on-call developer's phone), and Email (for a permanent record and for team members who are not on Slack). For critical production errors, also consider adding SMS as a fallback channel.

Add Rate Limiting and Deduplication

When things go wrong in production, they often go wrong repeatedly. A single database connection failure can generate hundreds of identical errors per minute. Without rate limiting, your team will be flooded with notifications and start ignoring them. Implement a simple deduplication strategy: hash the error message and stack trace, and only send a new alert if the same error has not been reported in the last 15 minutes. Include a count of suppressed duplicates when you do send the alert.
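The strategy above can be sketched in a few lines. This is a minimal in-memory version (in production you would back it with Redis, as with any shared cache); the function and variable names are illustrative:

```python
import hashlib
import time

DEDUP_WINDOW = 15 * 60  # seconds
_seen = {}  # fingerprint -> {"last_sent": timestamp, "suppressed": count}

def should_alert(message, stack, now=None):
    """Return (send, suppressed_count) for one error occurrence.

    Fingerprints the error by message + stack trace, suppresses repeats
    inside the window, and reports how many duplicates were swallowed
    once the window reopens.
    """
    now = time.time() if now is None else now
    fp = hashlib.md5((message + stack).encode()).hexdigest()
    entry = _seen.get(fp)
    if entry and now - entry["last_sent"] < DEDUP_WINDOW:
        entry["suppressed"] += 1
        return False, entry["suppressed"]
    suppressed = entry["suppressed"] if entry else 0
    _seen[fp] = {"last_sent": now, "suppressed": 0}
    return True, suppressed
```

When `send` is true and `suppressed_count` is nonzero, append something like "(+N duplicates suppressed)" to the alert message.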

Include Error Context

An alert that says "TypeError occurred" is almost useless. Include context that helps your team diagnose the issue without having to log into the server: the error message and type, the stack trace (or at least the first few frames), the request URL and method, the user ID or email (if applicable), the environment (production/staging), the server hostname, and a timestamp. Format this data nicely for each channel using One-Ping's metadata field.

Code Examples

Express.js Error Handler with One-Ping

const crypto = require('crypto');

// Simple in-memory deduplication (use Redis in production)
const recentErrors = new Map();
const DEDUP_WINDOW = 15 * 60 * 1000; // 15 minutes

function shouldAlert(error) {
  const hash = crypto.createHash('md5')
    .update(error.message + error.stack)
    .digest('hex');

  const lastSeen = recentErrors.get(hash);
  if (lastSeen && Date.now() - lastSeen < DEDUP_WINDOW) {
    return false;
  }
  recentErrors.set(hash, Date.now());

  // Prune stale entries so the map does not grow without bound.
  for (const [key, seenAt] of recentErrors) {
    if (Date.now() - seenAt >= DEDUP_WINDOW) recentErrors.delete(key);
  }
  return true;
}

// Express global error handler
app.use(async (err, req, res, next) => {
  console.error(err);

  if (shouldAlert(err)) {
    try {
      await fetch('https://api.one-ping.com/send', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.ONEPING_API_KEY}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          message: `Error in ${req.method} ${req.path}: ${err.message}`,
          channels: ['slack', 'telegram'],
          metadata: {
            slack: {
              attachments: [{
                color: '#ff0000',
                title: `${err.name}: ${err.message}`,
                fields: [
                  { title: 'Endpoint', value: `${req.method} ${req.originalUrl}`, short: true },
                  { title: 'Environment', value: process.env.NODE_ENV, short: true },
                  { title: 'User', value: req.user?.email || 'Anonymous', short: true },
                  { title: 'Server', value: require('os').hostname(), short: true }
                ],
                text: `\`\`\`${err.stack?.substring(0, 500)}\`\`\``,
                footer: `${new Date().toISOString()}`
              }]
            }
          }
        })
      });
    } catch (alertErr) {
      console.error('Failed to send error alert:', alertErr);
    }
  }

  res.status(500).json({ error: 'Internal server error' });
});

Python / Django Middleware

import hashlib
import os
import time
import traceback

import requests

# Simple deduplication cache
_recent_errors = {}
DEDUP_WINDOW = 900  # 15 minutes

class OnePingErrorMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        return self.get_response(request)

    def process_exception(self, request, exception):
        # Django converts unhandled view exceptions into error responses
        # before they reach __call__, so a try/except around get_response()
        # would never see them; process_exception is the hook that does.
        self.send_alert(exception, request)
        return None  # defer to Django's normal error handling

    def send_alert(self, error, request):
        tb = traceback.format_exc()[:500]

        # Fingerprint on type, message, and traceback so distinct errors
        # that share a message are not collapsed together.
        error_hash = hashlib.md5(
            f'{type(error).__name__}{error}{tb}'.encode()
        ).hexdigest()

        now = time.time()
        last_seen = _recent_errors.get(error_hash)
        if last_seen and now - last_seen < DEDUP_WINDOW:
            return
        _recent_errors[error_hash] = now
        try:
            requests.post(
                'https://api.one-ping.com/send',
                headers={'Authorization': f'Bearer {os.environ["ONEPING_API_KEY"]}'},
                json={
                    'message': f'Error in {request.method} {request.path}: {error}',
                    'channels': ['slack', 'telegram'],
                    'metadata': {
                        'slack': {
                            'attachments': [{
                                'color': '#ff0000',
                                'title': f'{type(error).__name__}: {error}',
                                'text': f'```{tb}```',
                                'footer': f'{os.environ.get("ENV", "production")} | {request.method} {request.path}'
                            }]
                        }
                    }
                },
                timeout=5
            )
        except requests.RequestException:
            # Never let a failed alert mask the original error.
            pass

What to Include in Error Alerts

The most useful error alerts include enough context to start debugging immediately, without needing to log into the server or search through log files. Here is a checklist of what to include:

Error Identity

Error type (TypeError, ConnectionError), message, and a hash or fingerprint for deduplication. This tells you what happened.

Stack Trace

The first 5-10 frames of the stack trace showing which file, function, and line triggered the error. Truncate for readability.
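In Python, the frames closest to where the error was raised are the last ones in the formatted traceback, so a compact alert keeps the deepest frames. A minimal helper sketch (the function name is illustrative):

```python
import traceback

def top_frames(exc, limit=5):
    """Format the deepest `limit` frames of an exception's traceback --
    the frames nearest the raise site -- plus the final error line,
    for a compact, readable alert."""
    frames = traceback.format_tb(exc.__traceback__)[-limit:]
    return "".join(frames) + f"{type(exc).__name__}: {exc}"
```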

Request Context

HTTP method, URL path, query parameters, and relevant headers. This tells you what the user was doing when the error occurred.

Environment Info

Production/staging/development, server hostname, deployment version or commit hash, and timestamp. This tells you where it happened.

Important: Never include sensitive data in error alerts. Avoid sending passwords, API keys, credit card numbers, or personal health information. Sanitize request bodies and headers before including them in notifications.
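A simple recursive redaction pass before building the alert payload covers most cases. The key list here is an assumption, a starting point rather than an exhaustive denylist; extend it for your own application:

```python
# Illustrative redaction helper; the key set is an assumption -- extend it
# to cover every sensitive field your application handles.
SENSITIVE_KEYS = {"password", "authorization", "cookie", "api_key",
                  "credit_card", "ssn", "token", "secret"}

def sanitize(data):
    """Return a copy of `data` with sensitive values replaced, recursing
    into nested dicts so deeply buried fields are also redacted."""
    clean = {}
    for key, value in data.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, dict):
            clean[key] = sanitize(value)
        else:
            clean[key] = value
    return clean
```

Run request bodies and headers through `sanitize()` before attaching them to the notification metadata.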

Rate Limiting Strategies

A production error alerting system must handle error storms gracefully. Three approaches work well, roughly in order of complexity: deduplication (fingerprint each error by type, message, and stack trace, and suppress repeats within a cooldown window, as in the code above), a global cap (limit total alerts per time window so a storm of many distinct errors still cannot flood a channel), and aggregation (hold lower-severity errors and send them as a periodic digest rather than individual alerts).
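Deduplication alone does not help when many different errors fire at once, so a global cap across all errors is a useful second layer. A minimal sliding-window sketch (the class name and defaults are illustrative):

```python
import time
from collections import deque

class AlertRateLimiter:
    """Sliding-window cap: allow at most `max_alerts` per `window` seconds,
    regardless of which error produced them."""

    def __init__(self, max_alerts=10, window=60.0):
        self.max_alerts = max_alerts
        self.window = window
        self._sent = deque()  # timestamps of recently sent alerts

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self._sent and now - self._sent[0] >= self.window:
            self._sent.popleft()
        if len(self._sent) < self.max_alerts:
            self._sent.append(now)
            return True
        return False
```

Check `limiter.allow()` right before each send; when it returns False, log the alert locally instead of dispatching it.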

Combining with Server Monitoring

Error log alerts complement server monitoring alerts to give you complete visibility into your application's health. Server monitoring catches infrastructure issues (high CPU, disk full, server unreachable), while error alerts catch application-level bugs (null pointer exceptions, failed API calls, database query errors). Together, they cover the full spectrum of things that can go wrong in production.

Ready to catch errors instantly?

Start free with 100 messages/month. No credit card required.

Get started free