Yesterday I launched wormhol.org, a file sharing site that requires no email (unlike WeTransfer), no account (unlike Google Drive or Dropbox), and has a high file size limit (unlike Firefox Send).
I first built it about two years ago, but it had major design flaws that kept me from being proud of it, so I took it down and rewrote it as wormhol.org.
I believe file sharing online should be easier than offline. Too many times I've found myself trying to figure out how to share files with a friend, only to find out I needed a Google/Microsoft/Dropbox account to do so! I don't have any of those, and I wouldn't throw my "ditching Google" efforts out the window just to share a stupid file that doesn't fit in an email.
And so I built wormhol.org (formerly files.bejarano.io).
- Files up to 5GB
- Unlimited files and downloads
- Files expire one week after upload
- No emails/accounts/logins required
- No user data collection whatsoever
- No ads, trackers or cookies
There's only one condition: don't abuse it. Use it strictly for sharing files between people; don't use it to host your site's images; don't share illegal or copyrighted media; and so on.
Let's be nice!
The technical details
Now these are even cooler!
The site is serverless and databaseless!
The site is built strictly on Amazon Web Services' S3, API Gateway and Lambda services.
There's a single API call to request a presigned upload URL (so that your browser uploads straight into S3), powered by API Gateway and Lambda.
Both the site documents (landing page, about page, etc.) and the uploaded files are stored in an S3 bucket in Stockholm, 🇸🇪.
That's why there's a file size limit: S3 does not support more than 5 GB per PutObject operation. I would have liked it to be truly unlimited, though.
This way, I have no servers to operate, and it scales O(n) with load (if that buzzword even exists).
When a file is uploaded, instead of storing its name and size to a DynamoDB table and adding an API resource to pull the metadata into the download page, the Lambda function renders the download page HTML template with the metadata and stores it in the bucket. So when you access your file's download page, you're just pulling an HTML from S3, no compute required.
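That render-once idea can be sketched with the standard library's `string.Template`. The template markup and function names below are my own illustration, not the site's actual page:

```python
from html import escape
from string import Template

# Hypothetical download-page template; the real page is surely fancier.
PAGE_TEMPLATE = Template("""\
<!DOCTYPE html>
<html>
  <body>
    <h1>$name</h1>
    <p>Size: $size bytes</p>
    <a href="$file_url">Download</a>
  </body>
</html>
""")


def render_download_page(name: str, size: int, file_url: str) -> str:
    """Render the static download page once, at upload time."""
    return PAGE_TEMPLATE.substitute(
        name=escape(name),  # escape user input before embedding in HTML
        size=size,
        file_url=file_url,
    )

# The Lambda would then store the rendered HTML in the bucket, e.g.:
# s3.put_object(Bucket=BUCKET, Key=f"{key}/index.html",
#               Body=html, ContentType="text/html")
```

Every later visit to the download page is then a static GET from S3, with no Lambda invocation at all.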
This way there are no database costs, which would be significantly higher than what it costs to store one download page (about 3 KiB) per file.
For more details, check out the code; it's open-source!
There are some actions I could take to improve the overall user experience:
- Speeding up the single API call by switching to an edge-optimized API Gateway deployment
- Enabling S3 Transfer Acceleration to speed up uploads (not cheap!)
- Tweaking the UI based on user feedback
That's it! Feel free to use wormhol.org and tell me what you think!
Thanks for reading!
Feel free to contact me if I'm wrong about something or you have any doubts.
Have a nice day!