Overview

PyMTR network diagnostics

PyMTR is a clean-room Python network diagnostic tool inspired by the classic WinMTR workflow. It combines continuous traceroute-style probing with ping-style metrics in a desktop GUI, an MTR-like command line, and a Rich live terminal UI.

PyMTR is not a port of WinMTR or MTR source code. It is a new MIT-licensed implementation that preserves the validated behavior and user workflow while using Python, Tkinter, and an MTR-like packet helper built around tokenized raw-socket probes.

Critical Development Disclaimer

PyMTR is still under active pre-1.0.0 development.

This application is intended to assist network troubleshooting, but it does not guarantee absolute measurement accuracy, legal-grade evidence, protocol parity with every MTR build, or correctness under every network/security device behavior. Use PyMTR results as supporting technical evidence, compare them with other tools and telemetry, and avoid making critical operational, contractual, financial, or safety decisions from PyMTR output alone.

Features

  • Continuous trace until the user presses Stop.

  • WinMTR-compatible table workflow with MTR-style metrics.

  • Metrics: Hostname, Nr, Loss %, Drop, Recv, Sent, Last, Best, Avg, Worst, StDev, Gmean, Jttr, Javg, Jmax, Jint, LP50, LP95, LP99, JP50, JP95, JP99.

  • DNS resolution keeps the IP address visible.

  • Full-row selection for troubleshooting calls.

  • Selected rows remain highlighted while live results refresh.

  • Column reordering and visible-column selection with persisted settings.

  • Visible insertion indicator while dragging columns.

  • Per-column conditional formatting for numeric metric cells.

  • Per-cell threshold highlighting: only values above the configured column threshold are emphasized.

  • Column resizing with persisted widths.

  • Configurable route mode: static route by default or dynamic route when desired.

  • Temporary per-hop historical series for live troubleshooting during the current trace.

  • Multi-hop details windows with separated hop identity, snapshot metrics, live metrics, factual IPv6 display when available, line charts, metric selection, zoom, pan, horizontal scroll, resize, and tooltips.

  • Per-hop chart export to PNG, JPG, HTML, and PDF.

  • Generate FullReport PDF export with every catalog metric and historical per-hop charts for the current session.

  • TXT, CSV, and HTML export using the same visible columns shown in the table.

  • MTR-like CLI: no arguments opens the GUI, pymtr HOST starts a live TUI, and pymtr --report HOST generates finite reports.

  • Rich-based live TUI using the same column catalog and snapshot values as GUI/exporters.

  • Unix-style command manual in manual.md, with a packageable manpage source under packaging/.

  • Help dialog explaining each metric.

  • Help includes all available metrics, including historical percentile metrics.

  • File menu with clipboard actions, export actions, Generate FullReport, log-folder access, and temp/data-folder access.

  • Keyboard shortcuts with a resizable About > Hotkeys reference dialog.

  • Configurable Tkinter/ttk theme in Options.

  • About menu with resizable License, Help, Hotkeys, Report an Issue, Documentation, Discussions, Issues, and Repository guidance.

  • Optional OpenTelemetry-style debug logging controlled from Options, including every trace cycle and attempted hop.

  • MTR-like subprocess backend with tokenized probes, out-of-order reply handling, and per-probe timeouts.

  • Concurrent one-probe-per-TTL trace cycles, avoiding serialized timeout delays across 30 hops.

  • Branding text loaded from .env: application title, footer, website link, repository URL, and license type.

  • Dedicated Sphinx User Guide and Downloads pages for operational use and release packages.

Metric Reference

All latency and jitter metrics are displayed in milliseconds. The Options interval is configured in milliseconds because it controls how often probe cycles are sent; it does not change the unit used by the result table. Packet loss is a percentage, and packet counters are absolute counts for the current trace session.

Each entry below lists the column label, its full name, what the metric measures, and how to interpret it.

Hostname (Hostname)
  IP address and resolved DNS name for the hop, when name resolution is enabled. PyMTR keeps the IP visible so evidence remains useful even when reverse DNS names change or are missing.
  Interpretation: Use it to identify the device, provider, or network segment represented by the hop. Treat names as hints and IP addresses as the stable troubleshooting reference.

Nr (Hop Number)
  One-based position of the hop in the path from the local computer to the target for the current trace session.
  Interpretation: Use the hop number as the shared reference during troubleshooting calls. Higher numbers are farther from the source, but the final hop is the destination that matters most for end-to-end impact.

Sent (Sent Packets)
  Number of probes sent to this hop during the current trace session.
  Interpretation: Higher Sent values improve confidence in trends. For incident evidence, wait for enough samples to distinguish persistent behavior from a short transient.

Recv (Received Packets)
  Number of replies received from this hop during the current trace session.
  Interpretation: Compare Recv with Sent, Drop, Loss %, and later hops before concluding that packets are lost. A low Recv count early in a trace may only mean the sample is still warming up.

Loss % (Packet Loss)
  Percentage of completed probes for this hop that did not receive a reply before timeout. It is derived from sent, received, and dropped probe counts.
  Interpretation: Loss at the destination or continuing through later hops is usually significant. Loss isolated to one intermediate hop often means that router deprioritizes diagnostic replies while still forwarding traffic.

Drop (Dropped Packets)
  Absolute count of completed probes that did not receive replies from this hop.
  Interpretation: Use Drop with Sent to judge sample size. One dropped probe in a tiny sample is weak evidence; repeated drops over many probes are much stronger.

Last (Last Latency)
  Most recent round-trip time measured for this hop, in milliseconds.
  Interpretation: Use Last to see what is happening now, but do not diagnose from it alone. A single high Last value can be a momentary spike.

Best (Best Latency)
  Lowest round-trip time observed for this hop, in milliseconds.
  Interpretation: Best shows the observed latency floor. If Best is low but Avg, Worst, StDev, or percentiles are high, the path can be fast but unstable.

Avg (Average Latency)
  Arithmetic mean of received round-trip times for this hop, in milliseconds.
  Interpretation: Use Avg as a baseline, but compare it with Worst, StDev, LP95, and LP99 because averages can hide short but operationally painful spikes.

Worst (Worst Latency)
  Highest round-trip time observed for this hop during the current trace session, in milliseconds.
  Interpretation: Worst exposes spikes that can affect voice, video, remote desktop, SSH, database calls, and timeout-sensitive systems. Confirm whether later hops also show the spike.

StDev (Standard Deviation)
  Statistical spread of latency samples around the average, in milliseconds.
  Interpretation: High StDev means unstable latency. Two paths can have the same Avg, but the one with higher StDev will usually feel less predictable to users.

Gmean (Geometric Mean)
  Geometric mean of positive latency samples, in milliseconds. It is an alternate central tendency that is less pulled by extreme spikes than Avg.
  Interpretation: Compare Gmean with Avg. If Avg is much higher than Gmean, a smaller number of high-latency spikes may be pulling the arithmetic mean upward.

LP50 (Latency Percentile 50)
  Nearest-rank median of historical Last latency values for this hop, in milliseconds.
  Interpretation: LP50 shows the typical midpoint of the current trace history. It is often more representative than Avg when rare spikes exist.

LP95 (Latency Percentile 95)
  Nearest-rank 95th percentile of historical Last latency values for this hop, in milliseconds.
  Interpretation: LP95 shows the latency level that 95 percent of collected samples stayed at or below. It is useful for judging sustained high-latency boundaries.

LP99 (Latency Percentile 99)
  Nearest-rank 99th percentile of historical Last latency values for this hop, in milliseconds.
  Interpretation: LP99 highlights rare but important latency spikes. Use it with Worst to separate isolated extremes from broader tail latency.

Jttr (Current Jitter)
  Absolute difference between the newest and previous latency sample for this hop, in milliseconds.
  Interpretation: Jttr shows the newest latency movement. Validate recurring instability with Javg, Jmax, Jint, JP95, or JP99 before escalating based on one movement.

Javg (Jitter Average)
  Average of observed jitter values for this hop, in milliseconds.
  Interpretation: Sustained high Javg indicates recurring latency oscillation and can affect real-time applications even when average latency looks acceptable.

Jmax (Maximum Jitter)
  Largest observed jump between consecutive latency samples for this hop, in milliseconds.
  Interpretation: Use Jmax to identify the worst latency swing in the session, especially when users report intermittent freezes or audio/video glitches.

Jint (Interarrival Jitter)
  Smoothed jitter estimate based on consecutive round-trip measurements, in milliseconds.
  Interpretation: Treat Jint as a smoothed instability indicator for diagnostic probes. It is not the same as one-way application jitter measured by voice or video systems.

JP50 (Jitter Percentile 50)
  Nearest-rank median of historical current jitter values for this hop, in milliseconds.
  Interpretation: JP50 shows the typical jitter midpoint in the current trace history. It helps separate normal small variation from recurring instability.

JP95 (Jitter Percentile 95)
  Nearest-rank 95th percentile of historical current jitter values for this hop, in milliseconds.
  Interpretation: JP95 highlights sustained jitter risk without being dominated by the single worst sample.

JP99 (Jitter Percentile 99)
  Nearest-rank 99th percentile of historical current jitter values for this hop, in milliseconds.
  Interpretation: JP99 is useful for rare jitter spikes that can disrupt real-time traffic. Confirm whether the same pattern continues to later hops or the destination.
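The nearest-rank percentile, geometric-mean, and current-jitter definitions above can be reproduced with a few lines of Python. This is a minimal sketch of the definitions as described, not PyMTR's actual implementation, and the sample latency history is invented:

```python
import math

def nearest_rank(samples, p):
    """Nearest-rank percentile: the smallest sample with at least
    p percent of all samples at or below it (1-based rank)."""
    if not samples:
        return None
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def gmean(samples):
    """Geometric mean of the positive samples only."""
    positive = [s for s in samples if s > 0]
    if not positive:
        return None
    return math.exp(sum(math.log(s) for s in positive) / len(positive))

def jitter_series(samples):
    """Current-jitter values: absolute difference between consecutive samples."""
    return [abs(b - a) for a, b in zip(samples, samples[1:])]

# Hypothetical per-hop latency history (milliseconds)
latencies = [12.0, 11.5, 13.2, 90.0, 12.4, 12.1, 11.9, 12.6]
lp50 = nearest_rank(latencies, 50)   # typical midpoint: 12.1
lp95 = nearest_rank(latencies, 95)   # tail boundary: 90.0
```

Note how the single 90 ms spike dominates LP95 but barely moves LP50, which is why the percentile columns separate typical behavior from tail behavior.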

Architecture

The canonical architecture documentation is maintained in the root architecture document and is included in the generated Sphinx site as the Architecture page. Keeping detailed diagrams in one source avoids duplicated, conflicting flow descriptions.

Historical Hop Charts

During each trace, PyMTR stores temporary per-hop metric samples in a local SQLite database under data/history. Each process and trace session receives a unique file named like pymtr-history-<pid>-<session-id>.sqlite3, plus a JSON metadata sidecar containing the PID, session ID, target, start time, and PyMTR version. This keeps multiple PyMTR instances isolated from each other.

The database is a runtime buffer only: it is created when a trace starts, kept while the current result is visible, and removed only by the process/session that created it when a new trace starts or the application exits. PyMTR does not perform broad startup cleanup, because another instance may be running. If a user terminates PyMTR abruptly, old history files may remain available for manual cleanup through File > Open Temp Folder.

Double-clicking a hop opens a modeless details window. Multiple hop detail windows can stay open at the same time. Each window shows the hop identity separately from the initial snapshot metrics and the live metrics for that same hop, including the local timestamp when each metric group was collected. The chart uses the local computer timestamp on the X axis, not elapsed time, so troubleshooting notes can be correlated with real clock time.

Chart metrics are grouped as packet/percentage metrics, latency metrics, and jitter metrics. The latest non-empty selection is saved globally for future detail windows and future executions. If all metrics are unchecked, only the current window shows an empty chart; that empty selection is not saved as the next default. Latency and jitter metrics are shown in milliseconds, packet loss is shown as a percentage, and packet counters are shown as counts. The chart is line-only for readability and supports live follow, zoom, pan, horizontal scrolling, resize, and point tooltips. Enabling Live preserves the current zoom width and follows the newest samples.

Export Chart in the hop details window exports the selected hop history as PNG, JPG, HTML, or PDF. File > Generate FullReport creates one PDF for the current instance/session, containing all catalog metrics for each visible hop, one combined all-metrics chart whose legend lists every historical metric label, and one individual chart per metric. The legacy TXT, CSV, and HTML exports remain table-snapshot exports and do not include history.

Main Table Usability

The main results table is a custom Canvas-based metrics grid. It preserves the validated WinMTR-style workflow while allowing cell-level rendering that the standard Tkinter table cannot provide. Dragging a column shows a vertical line where the column will be placed. Right-clicking a numeric column header opens conditional formatting for that column only, where analysts can define normal, warning, and critical text colors using two numeric thresholds. The trigger is evaluated per numeric cell in that column: values at or below the normal limit keep the normal color, values above the normal limit and at or below the medium-impact limit use the warning color, and values above the medium-impact limit use the critical color. The rule does not change metric calculations or exported values.
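The two-threshold banding rule can be sketched in a few lines. The band names returned here are placeholders standing in for the user-chosen normal, warning, and critical colors; the boundary handling ("at or below") follows the description above:

```python
def cell_color(value, normal_limit, medium_limit,
               normal="normal", warning="warning", critical="critical"):
    """Classify one numeric cell against the column's two thresholds:
    at or below normal_limit keeps the normal color, above it but at or
    below medium_limit uses the warning color, above that uses critical."""
    if value <= normal_limit:
        return normal
    if value <= medium_limit:
        return warning
    return critical
```

The classification is purely cosmetic, matching the note above that the rule changes cell colors but never the computed or exported metric values.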

User Guide

The complete operational guide is maintained in userguide.md and included in Sphinx as the User Guide page.

Running a Trace

Enter a host name or IP address in Host and press Enter or Start. PyMTR continuously probes the route until Stop is pressed. The window title includes the active target while a trace is running.

Use Static route for most troubleshooting. It keeps the first complete route stable while metrics continue to update, making it easier to discuss a fixed hop number with other analysts. Use Dynamic route only when the goal is to observe path changes during the session.

Reading Results

Latency and jitter metrics are in milliseconds. Loss % is a percentage, and Sent, Recv, and Drop are packet counts. Intermediate-hop loss should be interpreted carefully: routers may deprioritize replies to themselves while forwarding traffic normally. Persistent loss continuing into later hops or the destination is more important.

Evidence and Reports

Use TXT, CSV, and HTML exports for quick table snapshots using the current visible columns. Use Export Chart for one hop’s chart. Use Generate FullReport when handing evidence to another team because it captures every metric and full historical charts from the current session.

Logs and Temporary Data

Detailed telemetry is disabled by default because it can generate large files. Enable it from Options only when debugging PyMTR behavior or collecting detailed evidence. File > Open Log Folder opens the debug log location. File > Open Temp Folder opens runtime data, including temporary SQLite history files that may remain after abrupt shutdown.

Development

Use Python 3.14 and a project-local virtual environment only.

py -3.14 -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install pytest sphinx sphinx_rtd_theme myst-parser sphinxcontrib-mermaid Pillow reportlab rich
python -m pip install -e .
python -m pip freeze > requirements.txt

Run tests and documentation:

.\.venv\Scripts\python.exe -m pytest
.\.venv\Scripts\python.exe -m sphinx -b html docs docs/_build/html

The Sphinx documentation uses the Read the Docs theme.

If GNU Make is available, the repository also includes convenience targets:

make test
make coverage
make docs
make build
make cli-help
make cli-version

Versioning

PyMTR uses Semantic Versioning (MAJOR.MINOR.PATCH) for releases. While the application is still pre-1.0.0, minor versions such as 0.7.0 may add features, and patch versions such as 0.7.1 should be reserved for compatible bug fixes. Beta suffixes may be used when a build is explicitly intended for field validation before a stable release.

Run the GUI:

.\.venv\Scripts\python.exe -m pymtr

Run the live TUI or a finite CLI report:

.\.venv\Scripts\python.exe -m pymtr github.com
.\.venv\Scripts\python.exe -m pymtr --report github.com -c 10 --csv github.csv
.\.venv\Scripts\python.exe -m pymtr --help

In a self-contained Windows release, use PyMTR.exe for the desktop GUI and PyMTR-CLI.exe for command-line and live TUI usage:

.\PyMTR.exe
.\PyMTR-CLI.exe github.com
.\PyMTR-CLI.exe --report github.com -c 10 --csv github.csv

Self-Contained Releases

Windows releases are built as a self-contained folder, not as a single-file executable. This keeps the package ready for GitHub Releases now and leaves room for a future installer without changing the application layout.

.\scripts\build-windows-release.ps1

The script runs tests, builds Sphinx documentation, runs PyInstaller in onedir mode, stages a copyable folder, and creates a ZIP:

release\PyMTR-v<version>-windows-x64\
release\PyMTR-v<version>-windows-x64.zip

To run the release build, keep the folder intact and open PyMTR.exe. The folder includes the executable, Python runtime files, Tcl/Tk runtime files, .env, LICENSE, README, generated Sphinx documentation, and a writable logs directory.

The Windows folder also includes PyMTR-CLI.exe for command-line and TUI usage. The source/installable console command remains pymtr; the packaged Windows CLI executable uses a distinct name because PyMTR.exe and pymtr.exe collide on Windows case-insensitive filesystems.

If Windows reports that it cannot access PyMTR.exe, unblock the ZIP before extracting it or run Unblock-PyMTR.ps1 from the extracted release folder. This can happen while the app is unsigned and copied from a browser download, cloud sync folder, USB drive, or network share.

Future Linux and macOS releases should follow the same rule: one platform-specific self-contained folder and one ZIP/TAR archive per platform.

Runtime Notes

PyMTR uses an MTR-like packet helper subprocess for runtime probing. The main GUI process does not send probes directly; it sends text commands to the helper and correlates replies by token. The helper currently supports ICMP, UDP, and TCP probe contracts where the platform allows raw sockets. SCTP is exposed only through the support contract and reports unsupported.
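The token-correlation idea can be illustrated with a small sketch. The command text format is hypothetical and the real helper protocol may differ; the point is that replies are matched by token, so out-of-order replies still update the correct hop:

```python
import itertools

class ProbeCorrelator:
    """Correlate helper replies with outstanding probes by token
    (illustrative sketch, not PyMTR's real helper protocol)."""

    def __init__(self):
        self._tokens = itertools.count(1)
        self._pending = {}  # token -> (ttl, callback)

    def send(self, ttl, callback):
        """Register a probe and return the token plus a hypothetical
        text command that would be written to the helper's stdin."""
        token = next(self._tokens)
        self._pending[token] = (ttl, callback)
        return token, f"probe token={token} ttl={ttl}"

    def on_reply(self, token, rtt_ms, responder):
        """Apply a helper reply to the matching probe, if any."""
        entry = self._pending.pop(token, None)
        if entry is None:
            return False  # unmatched or late reply: logged, never applied
        ttl, callback = entry
        callback(ttl, rtt_ms, responder)
        return True
```

Replies for token 2 can safely arrive before replies for token 1, and a reply with an unknown token is rejected rather than guessed at.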

PyMTR intentionally does not fall back to the Windows ICMP API or the native ping command. If raw sockets or the required platform capability are unavailable, the application reports a clear backend error instead of generating degraded or misleading route data. Automated tests use a fake backend and do not require network access.

PyMTR supports two route modes. Static route is the default and freezes the first complete route for the session, keeping the displayed path stable while metrics continue updating. Dynamic route updates displayed hops from the latest probe cycle, which is useful when the goal is to observe path changes.

Debug Telemetry

PyMTR can emit OpenTelemetry-style JSON Lines debug records for live troubleshooting. Detailed debug logging is disabled by default because it can produce large files. Enable it only from the Options dialog and choose the log path there.

Each record includes timestamp, trace_id, span_id, severity_text, body, resource, and attributes fields. Debug events include cycle start/end, packet-helper start/stop, command send, reply receive, probe start/result, hop metrics, backend errors, and asynchronous DNS lookup events.
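A record with those fields can be assembled like this. The ID formats, timestamp resolution, and resource content are illustrative assumptions; only the field names come from the list above:

```python
import json
import time
import uuid

def debug_record(body, severity="DEBUG", attributes=None):
    """Assemble one OpenTelemetry-style JSON Lines record using the
    field names listed above (values and schema are illustrative)."""
    return json.dumps({
        "timestamp": time.time_ns(),
        "trace_id": uuid.uuid4().hex,        # 32 hex chars
        "span_id": uuid.uuid4().hex[:16],    # 16 hex chars
        "severity_text": severity,
        "body": body,
        "resource": {"service.name": "pymtr"},
        "attributes": attributes or {},
    })
```

One such JSON object per line yields a log that stream-parses cleanly, which is why JSON Lines suits high-volume per-cycle and per-probe telemetry.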

When detailed debug logging is enabled, the helper also emits structured packet.helper.* diagnostics for probe send, raw receive, matched replies, unmatched replies, socket errors, and per-probe timeouts. In static route mode, alternate responses for frozen hops are logged as route.alternate_response events when debug telemetry is enabled. When probes are sent beyond a discovered route depth, PyMTR logs probe.ignored_after_route_limit instead of applying those probes to visible metrics.

Hotkeys

Enter: Start trace from the Host field.
Alt+F: Open File menu.
Alt+O: Open Options.
Alt+A: Open About menu.
Alt+X: Exit PyMTR.
Ctrl+O: Open Options.
F1: Open Help.
Ctrl+T: Copy the text report to clipboard.
Ctrl+H: Copy the HTML report to clipboard.
Ctrl+Shift+T: Export TEXT report.
Ctrl+Shift+C: Export CSV report.
Ctrl+Shift+H: Export HTML report.
Ctrl+Shift+F: Generate FullReport PDF.
Ctrl+L: Open log folder.
Ctrl+Shift+L: Open temp/data folder.
Ctrl+Q: Exit PyMTR.

Security Scanning

Trivy is not installed or executed locally by PyMTR. Security scanning runs only in GitHub Actions through the official aquasec/trivy:latest container. The workflow is informational for findings and publishes TXT, JSON, SARIF, JUnit, and Markdown summary reports as artifacts, but it does not apply fixes automatically. Unit tests remain the gate for release, documentation, and site publication.

Documentation Site

Sphinx builds the official English documentation, including the Overview, User Guide, Architecture, API Reference, Manual, Release Notes, and Downloads pages. GitHub Actions publishes the generated HTML to GitHub Pages whenever main is updated.

Release Notes

See release_notes.md for user-facing changes grouped by features, fixes, documentation, and security/CI updates.

Tkinter Themes

The Options dialog lists the themes known by the local Tcl/Tk installation. Selecting a theme applies it immediately so the user can see the result before saving.

License

PyMTR is released under the MIT License.