
Troubleshooting Display and Audio Issues on Linux: Browser-Based Tools That Actually Help
Linux hardware troubleshooting follows a predictable pattern. Something doesn’t work — the 144Hz monitor reports 60Hz, the Bluetooth headset produces sound but the microphone is dead, a fresh install shows a stuck pixel nobody noticed on the showroom floor — and you immediately drop into the terminal to investigate. xrandr, pactl, evtest, inxi. Half the time the output looks fine but the hardware still misbehaves, because the actual failure sits downstream of the layer the OS is reporting on.
For those cases, browser-based diagnostic tools fill a specific gap that CLI utilities can’t. They test what the hardware is actually doing in the context of the rendering pipeline or audio path your applications use. No package to install, no daemon to restart, no sudoers edits required. Just a URL and a working browser. This is a walkthrough of the display, audio, and input checks worth bookmarking on any Linux workstation — particularly after a fresh install or a major driver change.
Why browser-based diagnostics make sense on Linux
The Linux hardware stack is layered. A refresh rate issue could live in the driver (NVIDIA proprietary, nouveau, amdgpu), the display server (X11 or Wayland), the compositor (Mutter, KWin, Sway, Hyprland), the DRM subsystem, or the monitor’s EDID itself. CLI tools query a specific layer. Browser tools show you the result of the whole stack combined — exactly what GTK applications, Electron apps, and web content are receiving.
The distribution-agnostic angle also matters. A test that runs in Firefox works identically on Arch, Debian, Fedora, NixOS, and anything in between. There’s no AUR package to maintain, no .deb to find, no Flatpak sandbox weirdness to troubleshoot. For troubleshooting guides that need to be reproducible across distros and desktop environments, browser-based is the lowest common denominator. And for users running ephemeral systems — live USBs, immutable distros like Fedora Silverblue or Vanilla OS, containers — not having to install anything is the difference between a 30-second check and a 30-minute detour.
Display troubleshooting: the common suspects
Display issues on Linux fall into four recurring categories: wrong refresh rate detected, ghosting on VA or older IPS panels, dead or stuck pixels after fresh hardware, and color calibration shifts after using gammastep or redshift. Each has a quick browser test.
Dead pixel scans are worth running on any new laptop or external monitor, regardless of brand. Fresh pixel defects on day one are warranty-eligible at most retailers. The test method is straightforward: cycle full-screen solid colors (black, white, red, green, blue) and visually inspect the surface for anomalies that persist across background changes. A stuck pixel shows as a single bright dot on dark backgrounds; a dead pixel shows as a dark dot on bright ones.
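The color-cycle method can be sketched in a few lines of devtools-console JavaScript — this is a hypothetical minimal version, not any specific tool, and the color list and click-to-advance interaction are arbitrary choices:

```javascript
// Minimal dead/stuck pixel sweep: press F11 for fullscreen, then click
// anywhere to cycle through solid background colors.
const SWEEP_COLORS = ["black", "white", "red", "green", "blue"];

// Pure helper: return the next color in the sweep cycle.
function nextColor(current) {
  const i = SWEEP_COLORS.indexOf(current);
  return SWEEP_COLORS[(i + 1) % SWEEP_COLORS.length];
}

// Browser-only wiring; guarded so the helper stays usable elsewhere.
if (typeof document !== "undefined") {
  let color = SWEEP_COLORS[0];
  document.body.style.margin = "0";
  document.body.style.minHeight = "100vh";
  document.body.style.background = color;
  document.addEventListener("click", () => {
    color = nextColor(color);
    document.body.style.background = color;
  });
}
```

A stuck pixel will stay lit on the black screen; a dead one will stay dark on white, red, green, and blue.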
Refresh rate verification is the one most likely to surprise you. Wayland sessions on NVIDIA hardware have a history of reporting one refresh rate in xrandr/wlr-randr output while actually delivering another to the framebuffer. Multi-monitor setups with mixed refresh rates (one 60Hz, one 144Hz) are particularly prone to incorrect detection, with both displays sometimes defaulting to 60Hz after a compositor restart. An in-browser refresh rate checker measures the actual frames-per-second being delivered to the browser canvas — which is downstream of the driver, compositor, and GPU scheduler combined. If the browser reports 60Hz on a display you configured for 144Hz, you know the issue is somewhere in the rendering pipeline, not in a misunderstood xrandr command.
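The measurement these checkers perform boils down to timing `requestAnimationFrame` callbacks. Here is a hedged sketch of that idea for the devtools console — the sample count and the use of a median are my own choices, not any particular tool's method:

```javascript
// Estimate the refresh rate actually delivered to the browser canvas,
// independent of what xrandr/wlr-randr claim.

// Pure helper: median frames-per-second from frame timestamps (milliseconds).
function estimateFps(timestamps) {
  const deltas = [];
  for (let i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1]);
  }
  deltas.sort((a, b) => a - b);
  const median = deltas[Math.floor(deltas.length / 2)]; // robust to dropped frames
  return Math.round(1000 / median);
}

// Browser-only wiring: collect ~120 frames, then report.
if (typeof requestAnimationFrame !== "undefined") {
  const stamps = [];
  function sample(t) {
    stamps.push(t);
    if (stamps.length < 120) {
      requestAnimationFrame(sample);
    } else {
      console.log(`~${estimateFps(stamps)} Hz delivered to the browser`);
    }
  }
  requestAnimationFrame(sample);
}
```

If this prints ~60 on a display you configured for 144Hz, the downgrade is real and lives somewhere in the driver/compositor pipeline.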
Ghosting tests matter for anyone who’s switched to a new monitor and suspects pixel response time isn’t living up to the spec sheet. Fast-moving content produces visible trailing on panels with slow response times, even when the refresh rate is correct. This isn’t a Linux-specific issue — it’s hardware — but it’s worth confirming before you open a kernel bug report or start tweaking overdrive settings in the monitor OSD.
Audio troubleshooting: cutting through the PipeWire transition
Linux audio in 2026 is in a long transitional state. Most mainstream distributions have moved to PipeWire with pipewire-pulse compatibility, but plenty of setups still run pure PulseAudio, and a few holdouts configure ALSA directly. Bluetooth audio works better than it used to, but getting a new Bluetooth headset’s microphone to register often requires a dance of switching between HSP/HFP and A2DP profiles.
When audio goes sideways, the first question is always: does the hardware actually work, or is this a software routing issue? A browser-based speaker test plays a controlled tone through the default audio output and through individual left/right channels. If the tone plays cleanly through the browser but not through your music player, the issue is in application-level routing or sink selection, not the hardware. If it doesn’t play through the browser either, you’re looking at a lower-level driver or device problem — time to open pavucontrol or wpctl.
Microphone testing follows the same principle. USB audio interfaces, Bluetooth headsets, and integrated laptop microphones all expose themselves to the browser through the MediaDevices API. If the browser can capture audio from the device, WebRTC applications (Zoom, Google Meet, Jitsi) will be able to as well — the failure mode upstream of that is almost always permission-related. If the browser can’t capture, the hardware or the kernel-level device node is the problem. A two-minute microphone test in the browser eliminates half the diagnostic tree immediately.
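The capture check itself is a short MediaDevices call. This is a minimal sketch for the devtools console — the logging format is mine, and device labels only appear after permission is granted:

```javascript
// Quick microphone capture check via the MediaDevices API.

// Pure helper: pick audio inputs out of an enumerateDevices() result.
function audioInputs(devices) {
  return devices.filter((d) => d.kind === "audioinput");
}

// Browser-only wiring: request capture, list the inputs, release the device.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices
    .getUserMedia({ audio: true })
    .then(async (stream) => {
      const devices = await navigator.mediaDevices.enumerateDevices();
      console.log("audio inputs:", audioInputs(devices).map((d) => d.label));
      stream.getTracks().forEach((t) => t.stop()); // release the mic
    })
    .catch((err) => console.error("capture failed:", err.name));
}
```

A `NotFoundError` or `NotReadableError` here points at the device node or driver; a `NotAllowedError` is just permissions.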
Stereo channel verification is worth running after any major audio config change. Front-left and front-right swapped is surprisingly common after migrating from PulseAudio to PipeWire on setups with non-default channel maps. A simple stereo tone test (left tone only, then right tone only) confirms channel order in about 15 seconds.
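A left-then-right tone test is a few lines of Web Audio. This is a sketch, not a specific tool's implementation — the 440Hz frequency and one-second duration are arbitrary:

```javascript
// Left-then-right stereo tone check with the Web Audio API.

// Pure helper: map a channel name to a StereoPannerNode pan value.
function panFor(channel) {
  return channel === "left" ? -1 : channel === "right" ? 1 : 0;
}

function playTone(channel, seconds = 1) {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const pan = ctx.createStereoPanner();
  osc.frequency.value = 440; // A4 test tone
  pan.pan.value = panFor(channel);
  osc.connect(pan).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + seconds);
}

// Browser-only wiring: left tone, then right tone after a short gap.
if (typeof AudioContext !== "undefined") {
  playTone("left");
  setTimeout(() => playTone("right"), 1500);
}
```

If the "left" tone comes out of the right speaker, the channel map is swapped below the application layer.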
Input devices: custom layouts, polling rates, and ghosting
Linux users tend to customize keyboard layouts more than average. Custom xkb configurations, international layout switchers, Kinesis and other alternative keyboards, split mechanical builds with custom firmware — all of this creates situations where a specific key input isn’t landing where the user expects.
Browser-based keyboard tests display the keycode, key, and code values fired by each keypress, which is exactly what JavaScript applications, Electron-based editors, and games receive. An online keyboard test visualizes every key event as it fires, which makes xkb config debugging massively faster than parsing setxkbmap output or grepping through evtest logs. If a key fires the wrong code in the browser test, the issue is in the layer between kernel input events and the browser — typically xkb configuration, evdev remapping, or compositor-level key grabs. If the key fires correctly in the browser but wrong in a specific application, you know the problem is application-level (hotkey conflicts, xdotool mappings, Wayland key grab issues).
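The core of such a test is just a keydown listener; here is a minimal devtools-console sketch (the log format is my own, and `keyCode` is deprecated but still what many Electron apps and games consume):

```javascript
// Log the key/code/keyCode values each keypress delivers to the page —
// the same values JavaScript applications receive.

// Pure helper: format the relevant fields of a key event.
function describeKey(e) {
  return `key=${e.key} code=${e.code} keyCode=${e.keyCode}`;
}

// Browser-only wiring.
if (typeof document !== "undefined") {
  document.addEventListener("keydown", (e) => {
    e.preventDefault(); // keep browser shortcuts from firing during the test
    console.log(describeKey(e));
  });
}
```

`code` reflects the physical key position and `key` the layout-mapped character, so a key that logs `code=KeyA key=q` tells you immediately that a layout remap — not the hardware — is in play.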
For mechanical keyboard enthusiasts, N-key rollover tests verify that simultaneous keypresses register correctly. For gamers and stream deck users, polling rate and click speed tests help diagnose laggy input on some USB hubs or docks. None of these are Linux-exclusive concerns, but Linux users have more customization layers that can silently break input — which makes quick verification more valuable.
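An N-key rollover check reduces to tracking a set of held keys. A hedged sketch, assuming the simple hold-as-many-keys-as-you-can method:

```javascript
// Count simultaneously held keys to sanity-check N-key rollover.

// Pure helper: replay keydown/keyup events against a held-key set and
// return the maximum number of keys held at once.
function maxRollover(events) {
  const held = new Set();
  let max = 0;
  for (const { type, code } of events) {
    if (type === "keydown") held.add(code);
    else held.delete(code);
    max = Math.max(max, held.size);
  }
  return max;
}

// Browser-only wiring: hold down keys and watch the live count.
if (typeof document !== "undefined") {
  const held = new Set();
  document.addEventListener("keydown", (e) => {
    held.add(e.code);
    console.log(`held: ${held.size}`, [...held].join(" "));
  });
  document.addEventListener("keyup", (e) => held.delete(e.code));
}
```

If the count plateaus at 6 no matter how many keys you mash, the board is likely running in 6KRO BIOS-compatibility mode rather than true NKRO.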
Display calculations for HiDPI and fractional scaling
HiDPI on Linux has been “almost solved” for about a decade. Integer scaling (1x, 2x) works well in most environments. Fractional scaling (1.25x, 1.5x) still produces visible issues in XWayland apps, Electron applications, and anywhere outside native GTK4/Qt6 rendering pipelines. Getting the scale factor right requires knowing the actual pixel density of your display, which most spec sheets don’t report directly — they give diagonal inches and resolution.
A PPI calculator takes screen diagonal and native resolution, returns pixels-per-inch. For a 27-inch 4K monitor that’s about 163 PPI; a 32-inch 4K is 138 PPI. The rough rule for Linux: below 120 PPI, 1x scaling works fine. 120–180 PPI, 1.25x or 1.5x is typical. Above 180 PPI, 2x is usually right. Aspect ratio calculators similarly help when configuring multi-monitor layouts or when working with video content that needs specific framing.
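The calculation behind those numbers is one line of geometry, and the scale-factor rule above is easy to encode alongside it. A sketch — the thresholds are the article's rough rules, and the 1.5x-vs-1.25x choice in the middle band depends on taste and desktop environment support:

```javascript
// Pixels-per-inch from native resolution and diagonal size.
function ppi(widthPx, heightPx, diagonalInches) {
  return Math.round(Math.hypot(widthPx, heightPx) / diagonalInches);
}

// Rough Linux scale-factor suggestion from pixel density.
function suggestedScale(pixelsPerInch) {
  if (pixelsPerInch < 120) return 1;
  if (pixelsPerInch <= 180) return 1.5; // or 1.25, depending on DE support
  return 2;
}

console.log(ppi(3840, 2160, 27)); // ≈163 for a 27-inch 4K panel
console.log(ppi(3840, 2160, 32)); // ≈138 for a 32-inch 4K panel
```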
These aren’t troubleshooting tools in the same sense as the display and audio checks above, but they belong in the same utility category: browser-based, distribution-agnostic, no install required. Keeping them bookmarked alongside the diagnostic tests turns “what’s the right scale factor for this laptop” from a 10-minute research detour into a 15-second calculation.
A complete post-install diagnostic checklist
After every fresh Linux install, distro-hop, or major driver update, this sequence catches most hardware-layer issues before they become mysterious bugs later. Total time: 10–15 minutes.
- Dead pixel sweep on every display. Cycle full-screen colors on each monitor. New laptops and fresh external monitors occasionally ship with defects. Document and return anything you find within the retailer window.
- Refresh rate verification on high-refresh panels. Confirm the browser reports the refresh rate you configured. NVIDIA + Wayland combinations are the most common source of silent downgrades.
- Audio output test per channel. Front-left-only tone, front-right-only tone, center tone. Confirm channel mapping is correct and both speakers fire cleanly.
- Microphone input capture. For every audio input device (integrated mic, headset, USB interface), verify the browser can capture audio. If it can, WebRTC applications will work.
- Keyboard layout verification. Particularly important after setxkbmap changes, custom xkb configs, or layout switches. Test the keys you care about with a key event test and verify they produce the expected codes.
- Fractional scaling check. Calculate the actual PPI of each display. Verify your scale factor matches. Test a handful of Electron apps (VS Code, Discord, Element) — these are often the canaries for fractional scaling issues.
- Color uniformity across displays. If you’re running a color-corrected workflow with gammastep or redshift, verify white points and color temperature are consistent across all displays after any config change.
- Ghosting test on motion-heavy content. If you switched panels or changed driver versions, verify motion clarity is what you expect. Inverse ghosting from over-aggressive overdrive is worth knowing about before a long gaming or video session.
When to reach for browser tools vs CLI utilities
Both have their place. CLI utilities tell you what the system thinks is happening — xrandr reports what the X server believes about display configuration, pactl lists the sinks and sources PulseAudio has configured, evtest dumps kernel input events. Browser tools tell you what the rendering and capture pipelines actually deliver to application code, which is a different question.
When xrandr and the actual display behavior disagree, browser tools cut through the confusion immediately. When audio routes correctly in pactl but applications still can’t find the device, a browser-based speaker test confirms whether the issue is in the default sink selection or in the application’s own audio configuration. The two categories are complementary, not competing. For a Linux workstation, having both available — CLI for system-level diagnostics, browser-based for end-to-end verification — turns most hardware troubleshooting into a 5-minute sequence instead of a 2-hour dive into logs.
Aggregated hubs like testshub.io collect these diagnostic tools in one place, which reduces the friction of context-switching during troubleshooting. Bookmark the hub, remember the categories (display, audio, input, calculator), and the specific tool you need is always one click away — which matters more than it sounds like it should when you’re already three commands deep into a kernel log and don’t want to lose context searching for the right URL.
Wrap-up
Linux hardware troubleshooting is mostly about finding where in the stack the actual failure is living. CLI utilities tell you what each layer thinks is happening; browser-based tests tell you what the combined result of all layers is delivering to applications. The two together narrow the diagnostic surface fast. Bookmark the tools that match your common failure modes, integrate them into your post-install checklist, and save future-you the hour-long detour when a refresh rate silently drops or a Bluetooth microphone refuses to register.
Hardware diagnostics don’t need to be complicated. Most of the time the test takes longer to describe than to run.