Who We Are

We've been on Usenet since the early 2000s. Between the contributors here, we've run our own scrapers, written custom NZB tooling, built and maintained the arr stack across more homelab setups than any of us can count, dealt with par2 corruption at 3am when a 40GB grab came back 96% complete, argued with indexers about API rate limits, and migrated between enough providers to have a firm opinion about what actually matters.

We started newsgroupreviews.co because every review site we found gave the same five providers the top five spots. Those five providers are all owned by the same parent company. Most of those review sites don't mention that, or they mention it once in a footnote. We thought someone should write reviews from the perspective of people who actually track this stuff, run the tools, and know what backbone consolidation means for your completion rates and your wallet.

The team behind this site has hands-on experience with:

  • Building and maintaining the full arr stack: Sonarr, Radarr, Prowlarr, Lidarr, Readarr, SABnzbd, NZBGet. Not "we set it up once." We've tuned priority feeds, written custom post-processing scripts, configured indexer failover, and dealt with the joy of Prowlarr silently dropping an indexer because its API key rotated.
  • Writing our own provider testing tools. Not off-the-shelf. Scripts that pull NZBs from known-date reference posts, attempt completion, and log the results in a format we can compare across providers and across time.
  • Scene group conventions, P2P release naming, obfuscated releases, and the practical difference each one makes to how your downloader handles them.
  • Posting to Usenet, not just downloading. We understand the IHAVE protocol, header path propagation, and what it looks like when two "different" providers show identical article paths because they're running on the same backbone (there's a short sketch of that check after this list).
  • The technical difference between the DMCA (US) and NTD (Dutch/EU notice-and-takedown) regimes and how each one affects article availability over time. This matters more than most review sites acknowledge.
  • Tracking backbone consolidation through public ASN data, BGP routing tables, provider disclosures, and the practical test of "does this article show up at the same time on provider A and provider B?"
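
To make the article-path check concrete, here's a minimal Python sketch of it: pull the Path header for one known article from two providers and compare. The hostnames and message-ID are placeholders, and it assumes Python 3.12 or earlier, where nntplib still ships in the standard library.

```python
import nntplib  # stdlib through Python 3.12; removed in 3.13

# Placeholder endpoints -- substitute real hostnames, and pass user=/password=
# to NNTP_SSL(), since nearly every commercial provider requires auth.
PROVIDERS = {
    "provider_a": "news.provider-a.example",
    "provider_b": "news.provider-b.example",
}
MESSAGE_ID = "<placeholder-reference-post@example>"  # a known, dated article

def path_header(host: str, message_id: str) -> str:
    """Fetch the Path header for one article; it records every server hop."""
    with nntplib.NNTP_SSL(host) as srv:  # default NNTPS port 563
        _resp, info = srv.head(message_id)
        for raw in info.lines:
            line = raw.decode("utf-8", errors="replace")
            if line.lower().startswith("path:"):
                return line.split(":", 1)[1].strip()
    return ""

paths = {name: path_header(host, MESSAGE_ID) for name, host in PROVIDERS.items()}
for name, value in paths.items():
    print(f"{name}: {value}")
if len(set(paths.values())) == 1:
    print("Identical article path on both -> likely the same backbone.")
```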

What We Actually Test

Every review on this site is backed by actual testing. Not "we looked at the marketing page and summarized it." Here's what we do.

Completion Rate Sampling

We maintain a reference set of NZBs across four age brackets: releases from the last 30 days, releases from roughly a year ago, releases from five years ago, and releases older than ten years. Each bracket contains a mix of categories. We pull each NZB through the provider under test and log three things: percentage complete on first pass, whether par2 repair succeeded when needed, and whether the final result was usable.
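
The download and par2 steps run outside the logger, but here's a stripped-down Python sketch of the logging half. The field names and CSV layout are illustrative, not a published format.

```python
import csv
from dataclasses import asdict, dataclass, fields
from datetime import datetime, timezone

@dataclass
class CompletionSample:
    provider: str        # provider under test
    bracket: str         # "30d", "1y", "5y", "10y+"
    nzb_name: str        # which reference NZB from the fixed test set
    pct_complete: float  # articles retrieved on first pass, in percent
    par2_needed: bool    # did the grab need repair at all?
    par2_ok: bool        # did par2 repair succeed, if attempted?
    usable: bool         # was the final result actually usable?
    tested_at: str       # ISO timestamp, so samples compare across time

def log_sample(path: str, sample: CompletionSample) -> None:
    """Append one result; a flat, append-only CSV is easy to diff and graph."""
    with open(path, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=[f.name for f in fields(sample)])
        if fh.tell() == 0:  # brand-new file: write the header row once
            writer.writeheader()
        writer.writerow(asdict(sample))

log_sample("completion_log.csv", CompletionSample(
    provider="provider_a", bracket="5y", nzb_name="ref-set-2019-04",
    pct_complete=96.4, par2_needed=True, par2_ok=True, usable=True,
    tested_at=datetime.now(timezone.utc).isoformat(),
))
```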

We don't publish per-provider completion percentages. The numbers move too much. A provider that was 99.2% last month might be 97.8% this month because a specific takedown wave hit a category they index heavily. What we do instead is grade reliability over time. If a provider is consistently above 98% on recent content and consistently above 95% on five-year content, that tells us something. If another provider swings wildly between 93% and 99%, that tells us something different. The grades in our reviews reflect these patterns, not a single snapshot.
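
A toy version of that grading logic, with illustrative thresholds rather than our exact formula:

```python
from statistics import mean, pstdev

def grade(monthly_pct: list[float]) -> str:
    """monthly_pct: completion percentages for one provider and age bracket."""
    avg, spread = mean(monthly_pct), pstdev(monthly_pct)
    if spread >= 2.0:
        return "volatile"  # the swings matter as much as the mean does
    if avg >= 98.0:
        return "consistently strong"
    return "steady but middling"

print(grade([99.2, 98.9, 99.0, 98.7]))  # consistently strong
print(grade([93.1, 99.0, 94.5, 98.2]))  # volatile
```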

Speed Testing

We run sustained downloads, not bursty tests. The difference matters. A provider that bursts to 800 Mbit for the first 10 seconds and then drops to 200 Mbit is not the same as a provider that holds 600 Mbit for 20 minutes straight, even though a quick test might make the first one look faster.

We test from both residential gigabit connections and dedicated servers, in both US and EU locations. We watch for the connection ceiling (the point where adding more connections stops helping), sustained throughput over time, and any TCP-level oddities like aggressive window scaling or unexpected resets. When we say a provider "saturates a gig" in a review, we mean it held close to line rate for a sustained multi-gigabyte download. When we say it "tops out around 500 Mbit," that's where the ceiling was regardless of connection count.
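
The shape of the measurement matters more than the tooling. A minimal harness, sketched in Python: hold the download for a long window at a fixed connection count, then repeat with more connections until throughput stops climbing. The fetch is simulated here so the sketch runs standalone; a real run swaps in an actual NNTP article download.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def download_chunk() -> int:
    """Placeholder fetch so the harness runs standalone; replace with a
    real NNTP article download for actual measurements."""
    time.sleep(0.05)   # simulated network latency
    return 750_000     # simulated ~750 KB article body

def sustained_mbit(connections: int, duration_s: int = 1200) -> float:
    """Average Mbit/s held over the whole window, not a 10-second burst."""
    deadline = time.monotonic() + duration_s

    def worker(_: int) -> int:
        received = 0
        while time.monotonic() < deadline:
            received += download_chunk()
        return received

    with ThreadPoolExecutor(max_workers=connections) as pool:
        total_bytes = sum(pool.map(worker, range(connections)))
    return total_bytes * 8 / duration_s / 1_000_000

# The ceiling is where adding connections stops helping:
for n in (4, 8, 16):
    print(n, round(sustained_mbit(n, duration_s=5), 1))
```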

We do not publish exact speed numbers because the actual throughput you'll see depends on your ISP, your routing, your hardware, and whether your neighbor is running a torrent swarm. Publishing "Provider X: 847 Mbit/s" would be false precision.

Retention Verification

Providers publish retention numbers on their marketing pages. We don't take those at face value. We verify retention by attempting to retrieve articles from known dates. We use Wikipedia date-referenced posts, Project Gutenberg uploads with known timestamps, and other publicly dated reference material. If a provider claims 5,800 days of retention but we can't retrieve articles from 5,500 days ago, we note the gap.
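
Here's roughly what that probe looks like as a Python sketch, again assuming Python 3.12 or earlier for stdlib nntplib. The host and message-IDs are placeholders; the dated reference set is the part that takes real work to build.

```python
import nntplib

HOST = "news.provider-under-test.example"  # placeholder
REFERENCE_POSTS = {
    5500: "<known-post-from-5500-days-ago@example>",  # placeholders: IDs of
    4000: "<known-post-from-4000-days-ago@example>",  # publicly dated posts
    365:  "<known-post-from-last-year@example>",
}

with nntplib.NNTP_SSL(HOST) as srv:  # add user=/password= for real providers
    for age_days, message_id in sorted(REFERENCE_POSTS.items(), reverse=True):
        try:
            srv.stat(message_id)  # STAT asks: does the article exist at all?
            print(f"{age_days:>5} days old: present")
        except nntplib.NNTPTemporaryError as exc:
            print(f"{age_days:>5} days old: missing ({exc})")  # e.g. 430
```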

In practice, most established providers are honest about their retention figures within a reasonable margin. The exceptions get called out in the individual reviews.

Backbone Identification

This is the part most other review sites skip entirely or bury in a paragraph that reads like a disclaimer. We treat it as a first-order ranking criterion.

We identify backbones through a combination of methods: public ASN records and BGP routing data, DNS resolution of the provider's NNTP servers, parent company filings and public corporate disclosures, the provider's own statements about their infrastructure, and our own empirical observation of article propagation. When we notice that articles posted to newsgroup X appear on Provider A and Provider B at the same timestamp down to the second, and this happens consistently, that's a strong signal they're on the same backbone.
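
The DNS side is the easiest part to reproduce at home. A first-pass sketch with placeholder hostnames: resolve each provider's NNTP endpoint and look for overlapping address blocks. Real analysis layers ASN and BGP data on top, which this deliberately skips.

```python
import socket
from collections import defaultdict

HOSTS = {  # placeholders -- substitute the providers you're comparing
    "provider_a": "news.provider-a.example",
    "provider_b": "news.provider-b.example",
}

by_prefix: dict[str, set[str]] = defaultdict(set)
for name, host in HOSTS.items():
    for info in socket.getaddrinfo(host, 563, socket.AF_INET, socket.SOCK_STREAM):
        ip = info[4][0]
        prefix = ".".join(ip.split(".")[:3])  # crude /24 bucket, IPv4 only
        by_prefix[prefix].add(name)

for prefix, names in sorted(by_prefix.items()):
    if len(names) > 1:
        print(f"{prefix}.0/24 serves {sorted(names)} -> shared infrastructure?")
```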

Backbone identification matters for two reasons. First, if you're building a multi-provider setup for redundancy, you need actual article path diversity across your providers. Two subscriptions on the same backbone give you one article path, regardless of the brand names on the bills. Your SABnzbd priority groups aren't doing what you think they're doing. Second, and more important as a market issue: providers should disclose their infrastructure. The problem with review sites that recommend five providers from the same parent company isn't that sharing a backbone is inherently bad. It's that five brands are marketed as independent competitors while nobody tells the buyer they share one backbone.
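
The arithmetic on redundancy is blunt. A sketch, with placeholder brand-to-backbone assignments:

```python
# Count article paths, not subscriptions. The mapping below is illustrative,
# not a statement about any real brand.
SUBSCRIPTIONS = {
    "brand_one":   "backbone_x",
    "brand_two":   "backbone_x",  # the "backup" that isn't one
    "brand_three": "backbone_y",
}

paths = set(SUBSCRIPTIONS.values())
print(f"{len(SUBSCRIPTIONS)} subscriptions, {len(paths)} actual article paths")
# -> 3 subscriptions, 2 actual article paths
```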

Support Quality

We submit test tickets through each provider's support channels every quarter. Two types: a generic billing-style question ("can I switch from monthly to annual mid-cycle?") and a technical question that requires the agent to actually understand Usenet ("I'm seeing intermittent 430 errors on alt.binaries.* groups from your EU server, can you check the peer path?"). We log response time and whether the response was actually useful or just a canned "please try clearing your cache" reply.

Support matters more than speed tests for most users. A provider that's 50 Mbit slower but responds to a real technical ticket in four hours is better than a provider that's faster but takes three days to send a form letter. Our reviews reflect this weighting.

Payment Verification

We verify that listed payment methods work end-to-end. Not just that they're listed on the signup page. We've found cases where a provider advertises Bitcoin payment but the actual payment flow errors out, or where a listed payment method redirects to a gateway that rejects non-US cards. Those findings go into the reviews.

How We Rank

The ranking criteria, in rough order of weight (a toy scoring sketch follows the list):

  1. Backbone disclosure and diversity. What infrastructure does this provider actually run on, and do they say so? Shared backbones are common in the industry. What we weight is whether a provider discloses their infrastructure honestly, and whether a user's total provider set achieves actual article path diversity. Five brands on one backbone give you one article path no matter how many you subscribe to.
  2. Technical performance. Completion rates, speed, retention, connection limits. The core service.
  3. Transparency. Does the provider clearly state who owns them? Do they disclose their infrastructure? Do they document their policies?
  4. Support quality. Real response times, real answers. Not marketing promises.
  5. Payment flexibility. More options are better. Crypto with real incentives (not just "we accept it") is a plus. No-expiry block accounts score well.
  6. Pricing-to-value ratio. Not cheapest-wins. We're looking at what you get per dollar, adjusted for everything above.
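
A toy weighted-score version of the list above, to make "rough order of weight" concrete. The weights are illustrative; the real rankings involve judgment calls a spreadsheet can't capture.

```python
WEIGHTS = {  # illustrative weights, summing to 1.0
    "backbone_disclosure": 0.30,
    "technical":           0.25,
    "transparency":        0.15,
    "support":             0.12,
    "payments":            0.10,
    "value":               0.08,
}

def score(ratings: dict[str, float]) -> float:
    """ratings: 0-10 per criterion; returns a weighted 0-10 score."""
    return sum(weight * ratings.get(key, 0.0) for key, weight in WEIGHTS.items())

print(round(score({
    "backbone_disclosure": 9, "technical": 8, "transparency": 9,
    "support": 7, "payments": 6, "value": 8,
}), 2))  # -> 8.13
```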

We don't accept payment for placement. We don't have affiliate relationships with any provider we review. We're not going to claim we're perfectly objective, because nobody is. But we're transparent about our criteria, and we're transparent about the one thing most other review sites aren't: backbone ownership.

Rankings are opinion. We've laid out the criteria above, but two reasonable people could weight them differently. If backbone disclosure doesn't factor into your decision and you just want the cheapest monthly plan with decent speeds, your top three will look different from ours. That's fine. At least you'll be making that choice with the full picture.

The Omicron Consolidation Question

This is the section that makes us different from every other review site we've read.

Omicron Media, operating through Highwinds Network Group, owns Newshosting, Eweka, UsenetServer, Easynews, Tweaknews, Pure Usenet, XLned, and several other "competing" Usenet brands. These brands share the same physical backbone infrastructure. If you subscribe to Newshosting and then add Eweka as a "backup," you're paying two companies that are actually one company, for access to the same articles on the same servers.

This isn't speculation. It's documented through public corporate filings, ASN records, and the practical observation that articles posted to these providers appear simultaneously across all of them. The backbone is the same. The brands are marketing.

Most "best Usenet provider" lists on the web give the top five spots to some combination of Newshosting, Eweka, Easynews, UsenetServer, and Tweaknews. If those lists disclosed the ownership structure, readers would notice that the "top five" are all the same company. Most lists don't.

We mention this in every Omicron-brand review. We explain why backbone disclosure and article path diversity matter for users building multi-provider setups. This isn't an anti-Omicron crusade. Their infrastructure is solid and their retention is competitive. The issue is transparency, not quality: five brands marketed as if they compete, with no disclosure that they share infrastructure.

Related

For more on how this consolidation pattern extends to the way r/usenet recommendations work, see our investigation of moderation patterns on r/usenet.

What We Won't Do

  • We won't publish exact completion percentages or speed numbers. They move too much to be honest as static figures.
  • We won't review a provider we haven't personally used. If it's not on the site, we either haven't tested it yet or it wasn't notable enough to justify a page.
  • We won't accept payment for placement. Not now, not later.
  • We won't take down a critical review because a provider asked nicely. If we're wrong about something, we'll correct it. If we're right and they don't like it, the review stays.
  • We won't push a US-optimized provider on a European user just because it ranks higher on our overall list. Context matters, and our "Best for Europe" and "Best for US" listicles exist for a reason.

Corrections and Feedback

We welcome corrections, especially factual ones. If we got a price wrong, a plan feature wrong, or an ownership detail wrong, tell us and we'll fix it. Disagreements about rankings are also welcome. If you think we've been unfair to a specific provider, make the case. We'll read it.

Reach us through the contact page.