I can't seem to turn off the part of my brain that is always looking to tinker with any computer/electronics stuff, and that includes websites like real-debrid.com, which I've been a very happy customer of for some time now. To take advantage of my new hosting plan, which now offers "unlimited bandwidth + unlimited storage", I decided this morning I would put it to the test: compile a copy of aria2c on the backend, pull all of my files stored in the cloud from my real-debrid account, and let the process run as long as need be behind a screen instance while I go about my day.
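For the curious, the general shape of that job is roughly the following. This is a sketch only - the paths, list file, and session name are placeholders, not my actual setup:

```bash
# Rough sketch of the backup job: feed aria2c a list of direct RD download links
# and let it run detached inside a screen session. All names here are placeholders.
# rd-links.txt holds one direct download URL per line.
screen -dmS rd-backup \
    aria2c --input-file=rd-links.txt \
           --dir=/home/me/rd-backup \
           --max-concurrent-downloads=4 \
           --continue=true
```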
Thanks to said unlimited storage I've been backing up my seedbox server files daily - about three TB already - so it'll be interesting to see how this all plays out. In any event, while I was waiting for aria2c to compile and with my real-debrid account pulled up, I noticed a few things about the HTTPS URL used to access my RD cloud storage folder.
I guess I should explain what Real-Debrid is for anyone unaware. RD is a paid service which partners with many popular filehosting companies (think Megaupload, Rapidshare, etc), and by paying an insanely small fee, you can remove the typical limits you'd have as a free user. Beyond that, they cache all torrent files on their CDN. This means if you want to download a 500GB torrent and someone else already has, it's likely cached on their servers already, so you can download from that cached copy at speeds sometimes exceeding 100MB/sec. I estimate I also have, at any given time, upwards of 2TB of cache I've downloaded in my folder, so in a way you're getting a ton of storage space too. Seriously, if you know how to use it, this service is amazing.
These cached resources are exactly what I'm talking about when I mention backing up all my RD cloud storage. There are two ways to access these cached files under your account (a quick sketch of both follows the list):
- Mount the resource as a webdav share, which requires authentication in the form of a user/pass combo on a specific port
- Access it via a standard HTTPS URL with no authentication; however, the root folder for the resource is the same as the password issued for the webdav authentication
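Here's a minimal sketch of both paths. The credentials, the 13-character folder, and the webdav endpoint below are all made up - the real host/port are listed on your RD account page:

```bash
#!/usr/bin/env bash
# Both access paths, using placeholder values. RD_PASS doubles as the HTTPS folder name.
RD_USER="someuser"         # webdav username (placeholder)
RD_PASS="ABC123DEF456G"    # webdav password / HTTPS root folder (placeholder)

# 1) Webdav: authenticated listing of the share (endpoint is a placeholder)
curl --user "$RD_USER:$RD_PASS" -X PROPFIND -H "Depth: 1" "https://dav.example.invalid/"

# 2) Plain HTTPS: no credentials at all, the folder name is the only "secret"
curl -L -O "https://my.real-debrid.com/$RD_PASS/somefile.mkv"
```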
The first question that comes to mind is: if the resource via HTTPS is accessible without authentication and is merely a tough folder to guess, there is a chance that a search engine like Google may have crawled some folders over the past decade or so. I decided to try some google dorking.
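(To be concrete about what I mean by dorking: queries along the lines of `site:my.real-debrid.com` or `inurl:"my.real-debrid.com/"` against the known URL base. These are just illustrative examples, not the exact strings that turned up the result below.)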

I. Google dorking for some insight
What does this mean?
- My initial guess was correct, and the mere fact that you can download other people's cached files in the first place confirms it.
- I was able to initiate downloads to my webhost account from these HTTPS links using aria2c and no authentication/cookies. These caches are accessible to the world.
- This is further backed by finding a cached result of Google indexing a user's cloud storage.
There is no service to search RD cloud cache. I feel like if there was, it would open their company up to a lot of trouble when it comes to pirated software, movies, etc. Perhaps the way they resolve this is with plausible deniability should any trouble come their way: they can easily show they've obfuscated the files, that there is no search engine maintaining awareness of cached content, and that they've done their due diligence in making it difficult to achieve. I mean, I can only imagine I'm not the first one who's understood the little data goldmines each user is sitting on.
It'd sure be nice to be able to parse it...
Deeper Analysis
Let's start with the URL:
- Served over HTTPS via "https://my.real-debrid.com/?????????????/"
- Root folder is limited to the character set [A-Z][0-9]
- Root folder appears to always be 13 characters long, based on my personal one and the one indexed on Google (keyspace math below)
- The ????????????? portion is equal to the webdav password; accessing the webdav resource requires authentication as user:pass
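For perspective, assuming that 13-character, 36-symbol pattern holds for every account, the keyspace works out to be enormous:

```bash
# 36 symbols (A-Z, 0-9) in each of 13 positions:
echo '36^13' | bc
# 170581728179578208256  (roughly 1.7 x 10^20 possible folder names)
```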
I am able to, as mentioned, pull files from the folders if I know the filename and if I instruct my downloader to follow redirects. Without following redirects, many files will 404 out. This indicates there is some process or protocol governing how and on which servers they are stored. Interestingly enough, this 13-character uppercase alphanumeric obfuscation is used throughout. One of the files I attempted to download, and the reason I even became aware of the redirect issue, showed as available from my account. I forgot to add the -L switch while using curl, and was unable to download it. Noticing my mistake, I applied the switch and was redirected to another server, a subdomain of real-debrid. This time, the location looked more like "https://dawn3.real-debrid.com/d/?????????????/filename". The main change being the addition of /d/ to the base URL.
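A quick way to see this for yourself is to compare a plain request against one that follows redirects. The folder and filename below are placeholders:

```bash
# Headers of the initial response, no redirect following:
curl -sI "https://my.real-debrid.com/ABC123DEF456G/filename.mkv"

# Same request but following redirects; print the final URL curl lands on
# (in my case a dawnN.real-debrid.com host with /d/ in the path):
curl -sIL -o /dev/null -w '%{url_effective}\n' "https://my.real-debrid.com/ABC123DEF456G/filename.mkv"
```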
Now, I have no intention of poking around much more than this; I don't want to get into running even passive scans against their services or hunting for holes in their systems. But I can say from a moral standpoint, considering the caches are public, the folders may contain the webdav password but give no indication as to which user account it belongs to - meaning there really is no harm done in trying to find these cache resources, whether it be by google dorking, crawling, or passively fuzzing!
The Real-Debrid Fuzzer
I whipped this up earlier this morning. It's functional, for sure. I've tested it against known locations and unknown ones, and logged where you could get potential false negatives depending on the URL structure. It takes two parameters, the first being the number of fuzz attempts and the second being the time in seconds between each attempt. You can expect it to almost exclusively return a 403 code; this seems to be the way real-debrid handles attempts to navigate to non-existent files or folders, thus creating more uncertainty. Real-debrid will 100% return a 200 success on an actual user folder, and this script will log those entries for analysis.
While this script works, it has known downfalls when compared to professional toolsets such as Burp Suite. When I write a script like this, it's more often to help verify questions I may be asking in a way which is repeatable. And while finding these resource folders would be great, there are considerable things to keep in mind. Reading through the code again, specifically the random string generator shown below, note that letters will outnumber digits: with A-Z being 26 characters and 0-9 being 10, each position is roughly 2.6 times more likely to be a letter than a digit. So take it for what it is!
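A quick sanity check of that split, using the same tr pipeline the generator uses:

```bash
# Generate a large sample from the same character set and compare counts;
# letters should come out ahead of digits by roughly 26:10 (~2.6x).
s=$(tr -dc 'A-Z0-9' </dev/urandom | head -c 130000)
letters=$(printf '%s' "$s" | tr -d '0-9' | wc -c)
digits=$(printf '%s' "$s" | tr -d 'A-Z' | wc -c)
echo "letters=$letters digits=$digits ratio=$(echo "scale=2; $letters/$digits" | bc)"
```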

II. It's basic.

#!/usr/bin/env bash
# Usage: ./rd-fuzzer.sh [100] [2]
#   [100] = number of unique URLs to fuzz, [2] = delay in seconds between attempts
clear
strCt=$1 && x=0 && lstFile="/tmp/str.lst"   # string count, loop counter, scratch file
cURL="" && pURL=""                          # current and previous URL under test
tryCnt=0 && delay=$2                        # attempt counter and per-request delay
function gRndStr() {
    # Generate $strCt random 13-character strings from [A-Z0-9] into $lstFile
    cat /dev/null > "$lstFile"
    while [ "$x" -lt "$strCt" ]; do
        echo "$(tr -dc 'A-Z0-9' </dev/urandom | head -c 13)" >> "$lstFile"
        (( x++ ))
    done
}
function testURL() {
    # Request the candidate URL, following redirects, and log anything interesting
    response=$(curl -L --write-out '%{http_code}' --silent --output /dev/null "$1")
    echo -e "$response | $1"
    case $response in
        '200') echo "Success: $1" >> live.lst; echo 'Hit! Saved to ./live.lst' ;;
        '403') ;; # 403 - permission denied, the default response for non-existent links
        *) echo "Unexpected result: $1" >> live.lst ;; # log anything else for review
    esac
}
function genURL() {
    # Build the candidate URL from a random string and hand it to testURL
    pURL=$cURL
    cURL="https://my.real-debrid.com/$1"
    testURL "$cURL"
}
main() {
    echo -e "\nReal-Debrid Cloud Storage Fuzzer // dtrh.net"
    echo -e "--------------------------------------------"
    echo -e "Fuzzing $strCt urls with $delay seconds delay\n\n"
    echo -e "HTTP Code | Fuzz Target\n"
    gRndStr $strCt
    for l in $(cat "$lstFile"); do
        genURL "$l"
        [ -z "$delay" ] && break || sleep "$delay"   # no delay given: bail after the first attempt
    done
}
main