journalctl tail - How to View journalctl Logs Live in Real-Time


Updated on Apr 25, 2025

When troubleshooting Linux systems that use systemd, monitoring logs in real-time is essential for identifying and resolving issues quickly. The journalctl command with its -f parameter (short for “follow”) provides this capability, showing logs continuously as they’re generated—similar to the traditional tail -f command but with the powerful filtering capabilities of systemd’s journal. This real-time monitoring lets you observe system behavior as it happens, catching errors and anomalies immediately.

# Basic live log monitoring
journalctl -f

# Live monitoring of specific service
journalctl -u service_name -f

# Example: Monitor the journal daemon itself
journalctl -u systemd-journald.service -f

# Live monitoring with error level filtering
journalctl -f -p err

Summary

  1. Basic Real-Time Monitoring

    # Stream logs as they're generated
    journalctl -f
    
    # Monitor specific service
    journalctl -u nginx.service -f
    
  2. Filtering Options

    # Show only errors
    journalctl -f -p err
    
    # Filter by time
    journalctl --since="1 hour ago" -f
    
    # Limit initial output before streaming
    journalctl -n 20 -f
    
  3. Advanced Filtering and Highlighting

    # Highlight errors in stream
    journalctl -f | grep --color "error"
    
    # Monitor multiple services
    journalctl -u nginx -u mysql -f
    
    # Format as JSON for processing
    journalctl -f -o json
    
  4. Dashboard Integration

    • Export logs to Grafana, New Relic, or Datadog
    • Use promtail to send to Loki for visualization

Streaming Live Logs with journalctl -f and Filtering by Service (-u)

When troubleshooting issues in Linux systems using systemd, watching logs in real-time is invaluable. The journalctl -f command provides a live stream of system logs as they are being written, allowing you to observe system behavior as it happens.

Basic Real-Time Monitoring

journalctl -f

This command displays new journal entries as they are added to the system journal, continuously streaming until interrupted with Ctrl+C. It’s especially useful during service restarts, system updates, or when troubleshooting intermittent issues.
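
The same follow behavior works with other selectors too; for example, you can restrict the live stream to kernel messages while chasing a driver or hardware problem:

# Follow only kernel messages (a live view similar to dmesg)
journalctl -k -f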

Filtering by Service

For targeted monitoring, use the -u option to focus on specific services:

# Monitor the SSH service in real-time
journalctl -u ssh -f

# Monitor the Nginx web server in real-time
journalctl -u nginx -f

# Monitor a custom service
journalctl -u my-application.service -f


This filtering is particularly valuable when:

  • Deploying new service configurations
  • Troubleshooting service failures
  • Monitoring application behavior under specific conditions
  • Watching for authentication or access issues

Practical Example: Monitoring a Service During Restart

# In terminal 1: Watch the service logs in real-time
journalctl -u nginx -f

# In terminal 2: Restart the service
sudo systemctl restart nginx

With this approach, you can immediately see any startup issues, configuration problems, or error messages as the service initializes.
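
If you only need a quick snapshot rather than a continuous stream, systemctl status also prints the unit's state along with its most recent journal lines:

# One-off snapshot: unit state plus recent journal entries
systemctl status nginx --no-pager -l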

Basic Log Viewing: Filtering by Time (--since) and Limiting Output (-n)

While real-time monitoring is powerful, sometimes you need to look at recent events within specific timeframes. Journalctl offers flexible time filtering options that can be combined with real-time following.

Time-Based Filtering

# Show logs since boot and continue monitoring
journalctl --since=today -f

# Show logs since a specific time and continue monitoring
journalctl --since="2025-04-22 09:00:00" -f

# Show logs from the last hour and continue monitoring
journalctl --since="1 hour ago" -f

The --since parameter accepts various time formats, including relative times like “yesterday”, “2 days ago”, or absolute timestamps.
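
A few more accepted forms, shown here as a quick sketch (adjust the dates to your own system):

# Relative expressions
journalctl --since=yesterday
journalctl --since="2 days ago" -f

# Absolute timestamps, optionally bounded with --until
journalctl --since="2025-04-22" --until="2025-04-23 12:00:00"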

Limiting Initial Output

When starting real-time monitoring, you can limit how many past log entries are shown before beginning live streaming:

# Show the last 20 lines before starting real-time monitoring
journalctl -n 20 -f

# Show no past entries, only new ones
journalctl -n 0 -f

# Show all stored entries before following (--no-tail removes the line limit)
journalctl --no-tail -f

Combining Time Filters with Service Filters

You can combine these filters for precise monitoring:

# Show nginx logs since the current boot and continue monitoring
journalctl -u nginx -b -f

# Show SSH logs from the past 30 minutes and continue monitoring
journalctl -u ssh --since="30 min ago" -f

This approach helps focus your troubleshooting on relevant time periods while maintaining real-time awareness.

Efficient Log Searching and Highlighting with grep

While journalctl has built-in filtering capabilities, combining it with grep allows for more powerful pattern matching and visual highlighting of important information.

Basic Pattern Matching

# Highlight error messages in real-time logs
journalctl -f | grep --color "error"

# Highlight multiple patterns
journalctl -f | grep --color -E "error|warning|critical"

Case-Insensitive Searching

# Match "error", "ERROR", "Error", etc.
journalctl -f | grep --color -i "error"

Context Lines

Show lines before and after matches to get more context:

# Show 2 lines before and after each match
journalctl -f | grep --color -A 2 -B 2 "failed"

Inverting the Match

Sometimes it’s useful to exclude certain noisy log entries:

# Show all logs except those containing "periodic"
journalctl -f | grep --color -v "periodic"
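
To drop several noisy patterns at once, combine -v with an extended regex (the patterns below are only examples of typical noise):

# Exclude multiple noisy patterns in one pass
journalctl -f | grep -vE "periodic|CRON|session opened"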

Practical Example: Monitoring Authentication Failures

# Monitor for failed login attempts in real-time
journalctl -f | grep --color -E "Failed password|authentication failure|invalid user"

This command is particularly useful for monitoring security-related events across all services in real-time.
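
One caveat when chaining more commands after the grep: grep buffers its output when it is not writing to a terminal, so add --line-buffered to keep the stream moving (the tee stage and file name here are just for illustration):

# Keep matches flowing through a longer pipeline and append them to a file
journalctl -f | grep --line-buffered -iE "failed password|authentication failure" | tee -a auth-failures.log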

Monitoring Multiple Services Simultaneously

When troubleshooting complex issues that involve interactions between services, it helps to watch logs from several of them at once.

Using Multiple -u Options

# Monitor both web server and database logs simultaneously
journalctl -u nginx -u mysql -f

This command combines logs from both services in a single chronological stream, making it easier to spot cause-and-effect relationships.
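
When entries from the two services land within the same second, microsecond timestamps make the ordering unambiguous:

# Combined stream with microsecond-precision timestamps
journalctl -u nginx -u mysql -f -o short-precise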

Monitoring Service Groups

You can also monitor all services that match a pattern:

# Monitor all Apache-related services (-u accepts shell-style glob patterns)
journalctl -u 'apache*' -f

# Monitor all container-related services
journalctl -u '*container*' -f

Monitoring Service Dependencies

During complex troubleshooting, you might need to monitor a service and all its dependencies:

# First, identify dependencies
systemctl list-dependencies my-service

# Then monitor both the service and its critical dependencies
journalctl -u my-service -u dependent-service1 -u dependent-service2 -f
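
If the dependency list is long, you can build the -u arguments from systemctl output. This is only a rough sketch; it assumes every name printed by list-dependencies is a unit you actually want to follow:

# Follow a unit together with its direct dependencies
deps=$(systemctl list-dependencies --plain --no-pager my-service.service | awk 'NR>1 {print $NF}')
journalctl -f $(printf -- '-u %s ' my-service.service $deps)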

Practical Example: Web Application Stack Monitoring

# Monitor the entire web application stack in real-time
journalctl -u nginx -u php-fpm -u redis -u postgres -f

This approach provides a comprehensive view of your application’s behavior across multiple components.

Highlighting Logs from Multiple Services

When monitoring multiple services simultaneously, it can be challenging to quickly identify which logs belong to which service. Custom highlighting can solve this problem.

Using sed and ANSI Color Codes

# Different colors for different services (the \x1b escapes require GNU sed)
journalctl -u nginx -u mysql -f | \
  sed 's/nginx/\x1b[36mnginx\x1b[0m/g; s/mysql/\x1b[33mmysql\x1b[0m/g'

Using the ccze Tool

The ccze utility provides automatic colorization of logs:

# Install ccze first: sudo apt install ccze (Debian/Ubuntu)
journalctl -u nginx -u mysql -f | ccze -A

Service-Specific Terminals

For an even clearer separation, you can use multiple terminal windows with unique filters:

# Terminal 1
journalctl -u nginx -f

# Terminal 2
journalctl -u mysql -f

This approach provides complete separation while still allowing you to observe timing relationships between events.
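
If you prefer a single window, a terminal multiplexer such as tmux can show both streams side by side (assuming tmux is installed):

# Two live log panes in one tmux session
tmux new-session -d -s logs 'journalctl -u nginx -f'
tmux split-window -h -t logs 'journalctl -u mysql -f'
tmux attach -t logs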

Exploring Output Formats: JSON and Beyond

Journalctl supports various output formats that can be useful for both human reading and programmatic processing.

JSON Output for Programmatic Processing

# Stream logs in JSON format
journalctl -f -o json

This format is particularly useful for piping into tools like jq for further processing:

# Stream logs in JSON format and filter fields with jq
journalctl -f -o json | jq 'select(.PRIORITY=="3") | {time: ._SOURCE_REALTIME_TIMESTAMP, msg: .MESSAGE}'
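
Note that jq also buffers its output when writing into another pipe; its --unbuffered flag keeps results appearing in real time:

# Print just the message text as it arrives, without jq output buffering
journalctl -f -o json | jq --unbuffered -r '.MESSAGE'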

Other Useful Output Formats

# Verbose output with all fields
journalctl -f -o verbose

# Short output format (default)
journalctl -f -o short

# Short-precise format with microsecond timestamps
journalctl -f -o short-precise

# Only the message field
journalctl -f -o cat

Customizing Timestamp Format

For real-time monitoring with custom timestamps:

# Show ISO 8601 format timestamps
journalctl -f --output=short-iso

# Show ISO 8601 timestamps with microsecond precision
journalctl -f --output=short-iso-precise

Practical Example: Creating a Custom Monitoring View

# Custom monitoring view with precise timestamps and message prioritization
journalctl -f -o short-precise | \
  sed 's/error/\x1b[31mERROR\x1b[0m/g; s/warning/\x1b[33mWARNING\x1b[0m/g; s/critical/\x1b[35mCRITICAL\x1b[0m/g'

This command provides a clean real-time log view with highlighted priorities and precise timestamps.

Exporting Logs to Dashboards: Visualizing with Grafana, New Relic, and Datadog

Real-time log monitoring in the terminal is helpful for immediate troubleshooting, but for ongoing monitoring, log dashboards provide better visualization and alerting capabilities.

Sending Logs to Grafana Loki

Grafana Loki is a lightweight log aggregation system that pairs well with Grafana for visualization:

# Install promtail to ship logs to Loki (packaged in the Grafana repository; a release binary also works)
sudo apt install promtail

# Configure promtail to read from journal
# Example promtail config that sends journal logs to Loki:
server:
  http_listen_port: 9080
positions:
  filename: /var/lib/promtail/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'

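
With the configuration saved (the path below is just an example), start promtail pointing at it; it needs read access to the journal, so run it as root or as a user in the systemd-journal group:

# Run promtail with the config above; adjust the path to wherever you saved it
promtail -config.file=/etc/promtail/config.yaml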
Integrating with Datadog

Datadog provides comprehensive monitoring including log management:

# Install the Datadog agent
DD_API_KEY=your_api_key bash -c "$(curl -L https://raw.githubusercontent.com/DataDog/datadog-agent/master/cmd/agent/install_script.sh)"

# Configure the agent to collect journal logs

Enable log collection in /etc/datadog-agent/datadog.yaml (logs_enabled: true), then add to /etc/datadog-agent/conf.d/journald.d/conf.yaml:

logs:
  - type: journald
    service: "journald"
    source: "systemd"
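
The agent also needs permission to read the journal; on most systemd distributions that means adding the dd-agent user to the systemd-journal group and restarting the agent:

# Give the Datadog agent read access to the journal and reload its configuration
sudo usermod -a -G systemd-journal dd-agent
sudo systemctl restart datadog-agent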

Setting Up New Relic Integration

New Relic offers infrastructure and log monitoring solutions:

# Install the New Relic infrastructure agent
curl -Ls https://download.newrelic.com/install/newrelic-cli/scripts/install.sh | bash
sudo NEW_RELIC_API_KEY=your_key NEW_RELIC_ACCOUNT_ID=your_account_id /usr/local/bin/newrelic install -n logs-integration

Practical Example: Monitoring Critical Services with Grafana

  1. Set up Grafana and Loki using Docker:
docker-compose -f loki-docker-compose.yaml up -d
  2. Configure Promtail to collect and tag logs from critical services
  3. Create a Grafana dashboard with panels for:
    • Error rate by service
    • Log volume trends
    • Top error messages
    • Authentication failures

This setup provides both real-time monitoring and historical analysis capabilities.

Conclusion

Real-time log monitoring with journalctl -f is an essential skill for Linux system administrators and DevOps engineers. By combining the commands and techniques discussed in this guide, you can:

  • Quickly identify and troubleshoot issues as they happen
  • Filter logs to focus on relevant services and severity levels
  • Customize your view with highlighting and formatting
  • Export logs to dashboards for long-term monitoring and analysis

The ability to effectively monitor logs in real-time can drastically reduce troubleshooting time and help maintain system stability. Remember that different situations call for different monitoring approaches—sometimes a simple journalctl -f is sufficient, while complex issues might require sophisticated filtering and visualization.