I know storage is fairly cheap, but e.g. there are millions of new videos on YouTube every day, each probably a few hundred MB to a few GB. It all has to take an enormous amount of space. Not to mention backups.

109 points

Google just has a lot of storage space. They have dozens of data centers, each of which is an entire building dedicated to nothing but housing servers, and they’re constantly adding servers to existing data centers and building new ones to hold even more once the current ones are full.

IIRC, estimates tend to put Google’s current storage capacity somewhere around 10-15 exabytes. Each exabyte is a million terabytes. Each terabyte is a thousand gigabytes. That’s 10-15 billion gigabytes. And they can add storage faster than storage is used up, because they turn massive profits that they can use to pay employees to do nothing but add servers to their data centers.
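
A quick back-of-envelope check of those units (a sketch in Python; the 10-15 EB figure is just the estimate above, not an official number):

    # Convert the estimated capacity into terabytes and gigabytes (decimal units).
    EXABYTES = (10, 15)              # rough estimate from above, not an official figure
    for eb in EXABYTES:
        tb = eb * 1_000_000          # 1 EB = 1,000,000 TB
        gb = tb * 1_000              # 1 TB = 1,000 GB
        print(f"{eb} EB = {tb:,} TB = {gb:,} GB")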

Google is just a massive force in terms of storage. They probably have more storage than any other organization on the planet. And so, they can share a lot of it for free, because they’re still always turning a profit.

35 points

There are also techniques where data centers do offline storage by writing out to a high-volume storage medium (I heard Blu-ray as an example, especially because it’s cheap) and storing it in racks, all automated of course. This lets them store huge quantities of infrequently accessed data (most of it) in a more efficient way. Not everything has to be online and ready to go, as long as it’s capable of being made available on demand.

24 points

You can feel it on YouTube when you try to access an old video that no one has watched in a long time.

33 points

every time it lags, it’s because youtube has to send someone down to the basement to retrieve the correct blu-ray disc from a storage room

1 point

That’s the difference between a video served off a disk in some random DC in some random state vs. a video served from a cache that lives at your ISP.

It’s not offline storage vs. disk, it’s a special edge-of-network cache vs. a video that doesn’t live in that cache, but is still on a hard drive.

13 points

It’s far more likely that Google, AWS, and Microsoft are using tape for high-volume, long-term storage.

According to diskprices.com, these are the approximate costs of a few different storage media (assuming one is attempting to optimize for cost); a rough per-petabyte comparison follows the list:

  • Tape $0.004 - $0.006 / GB
  • HDD $0.009 - $0.012 / GB
  • Blu-ray $0.02 - $0.04 / GB
  • SSD $0.035 - $0.04 / GB
  • microSD $0.065 - $0.075 / GB
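
For a sense of scale, here’s what those per-GB prices imply for a petabyte of raw capacity (a sketch using the midpoints of the ranges above; real bulk pricing will differ):

    # Cost to store 1 PB (1,000,000 GB) at the midpoint of each price range above.
    prices_per_gb = {
        "Tape":    0.005,
        "HDD":     0.0105,
        "Blu-ray": 0.03,
        "SSD":     0.0375,
        "microSD": 0.07,
    }
    PB_IN_GB = 1_000_000
    for medium, price in prices_per_gb.items():
        print(f"{medium:8s} ~${price * PB_IN_GB:>9,.0f} per PB")

Tape comes out around $5,000 per raw petabyte vs. roughly $10,500 for HDD, which is why it keeps winning for cold archives.
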
5 points

Tape archives are neat too, little robot rearranging little tape cartridges in its cute little corridor

7 points

Tape drives are still in use in a lot of places too. Enormous storage density for stuff that’s in “cold storage”

6 points

I don’t think the storage density of a Blu-ray is anywhere near good enough for that use

5 points

Doesn’t BR only have like 100 gigs of capacity? That would take a shitton of space.

They use tapes for backups, but indeed there ought to be something in between.

3 points

https://engineering.fb.com/2015/05/04/core-data/under-the-hood-facebook-s-cold-storage-system/

This is an article from 2015 where Facebook/Meta was exploring Blu-ray for their DCs. You’re definitely right, though: tape is key as the longest-term storage.

4 points

They’re really using optical storage as a backup that can then be near-instantaneously accessed? That’s awesome.

2 points

Where did you get that from?

2 points

Super cool, blew my mind! I would love to see it in operation. The logistics from the machine side + the storage heuristics for when to write to a disc that’s write-once sound like a really cool problem.

17 points

The 10-15 EB estimate from XKCD is 10 years old.

12 points

Let’s be honest, it isn’t “free”. The user is giving their own data to Google in order to use their services; and data is a commodity.

4 points

Kinda starting to seem like “data” is becoming less and less valuable, or am I wrong?

6 points

well, there’s more and more of it, so the value per byte is decreasing: everything tracks you, and there’s only so much info you can get

8 points

And that’s just Google. Amazon and Microsoft also have massive data capacity that runs large chunks of the internet. And then you get into the small and medium-sized hosting companies, which can be pretty significant on their own.

3 points

15 exabytes sounds low. Rough math: one 20 TB hard drive per physical machine, times 50,000 physical machines, is one exabyte of raw storage. I bet 50,000 physical machines is a small datacenter for Google.
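
That rough math checks out (decimal units assumed):

    # One 20 TB drive per machine, across 50,000 machines.
    DRIVES = 50_000
    TB_PER_DRIVE = 20
    total_tb = DRIVES * TB_PER_DRIVE            # 1,000,000 TB
    print(f"{total_tb:,} TB = {total_tb / 1_000_000} EB raw")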

1 point

It’s still wild to imagine. That’s millions of hard drives, multiplied a few times over for redundancy across regions and for failures. Then the backups.

Remember when Google started by networking old office computers?

22 points

It’s the same story with AWS as well. They use vast amounts of storage and leverage different tiers of storage to get the service they want. It’s funny, but they have insane amounts of SD cards (the cheapest storage available at that size), use them for some storage, and just replicate things everywhere for durability. Imagine how small 256 GB SD cards are, and then you have hardware to plug in 200 of them practically stacked on top of each other. The storage doesn’t need to be the best; it just needs to be managed appropriately and actively to ensure that data is always replicated as devices fail. That’s just the cooler-tier stuff. It gets complex as the data warms.
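
Taking that comment’s numbers at face value (they’re the commenter’s, not official AWS specs), the density per unit is easy to work out:

    # 200 x 256 GB cards in one hypothetical enclosure.
    CARDS = 200
    GB_PER_CARD = 256
    raw_tb = CARDS * GB_PER_CARD / 1_000    # 51.2 TB raw per enclosure
    REPLICAS = 3                            # assumed replication factor
    print(f"{raw_tb} TB raw, ~{raw_tb / REPLICAS:.1f} TB usable at {REPLICAS}x replication")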

8 points

SD cards? I’m very skeptical. Do you have a source?

7 points

Yeah, this seems false. SD cards are unreliable, hard to keep track of, and don’t actually store that much data for the price. I do think they use tapes, though, to store long-term, low-traffic data.

1 point

We use LTO tapes in Hollywood to back up raw footage; it wouldn’t surprise me if AWS uses tapes for Glacier.

I got a tour of Iron Mountain once (where we sent tapes for long term archival). They had a giant room with racks and racks of LTOs, and a robot on rails that would make copies of each tape at regular intervals to keep the data from corrupting. It looked kinda like the archive room in Rogue One. Wouldn’t surprise me if Iron Mountain was an inspiration for the design. Super interesting.

1 point

Used to work there, but it’s been a few years, so maybe things have changed; but that was how we originally got super cheap and durable S3.

5 points

Ha, I had no idea data centers use SD cards! It makes sense in hindsight, but it’s still funny to think about

21 points

Absolutely huge data centers.

A full third of my town’s real estate is currently covered with a sprawling Google data center. Just enormous.

2 points

NoVa?

3 points

One of the largest.

1 point

I love driving through it when I go up to Winchester. Data centers galore.

1 point

Yeah, 10 or 15 years ago I read an article about how Google brings up new storage modules when they need to expand, and their modules are essentially shipping containers full of hard drives.

1 point

I lived in Herndon VA for work for a while.

Was so nice gaming with 2 ping.

20 points

Not only that, but for each video on YouTube there are different versions for each resolution. So if you upload a 1080p video, it gets converted to 1080p AVC/VP9, 720p AVC/VP9, 480p… and likewise for the audio.

If you run youtube-dl -F <youtube url>, you will see the different formats.
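
A minimal sketch of that kind of per-resolution “ladder”, done here with ffmpeg from Python (the ladder, filenames, and encoder choices are illustrative assumptions, not YouTube’s actual pipeline):

    # Transcode one source file into several resolutions up front.
    import subprocess

    LADDER = [1080, 720, 480, 360]        # hypothetical output heights

    for height in LADDER:
        subprocess.run([
            "ffmpeg", "-i", "source.mp4",
            "-vf", f"scale=-2:{height}",  # scale to target height, keep aspect ratio
            "-c:v", "libx264",            # AVC here; a VP9 rendition would use libvpx-vp9
            "-c:a", "aac",
            f"out_{height}p.mp4",
        ], check=True)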

7 points

Does YouTube actually store copies of each one? Or does it store one master copy and downsample as required in real time? Probably stores them, since storage is cheaper than CPU time

9 points

If it converted every video in real time, it would require a lot of CPU per server; it’s cheaper to store multiple copies. Also, the average video isn’t more than some 300 MB, less if it’s lower quality.

Anyone with Plex or Jellyfin knows that it’s better to have the same movie in both qualities (1080p, 720p) than to transcode, to avoid CPU usage.

It’s possible to have fast transcoding with GPUs, but with as many users as YouTube has, that would require a lot of power, and with high energy prices, storage is cheaper.
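
A toy break-even calculation makes the point (every number below is a made-up illustrative assumption, not a measured YouTube cost):

    # Compare storing an extra 720p rendition once vs. transcoding it per view.
    EXTRA_STORAGE_GB = 0.15          # assumed size of an extra 720p copy
    STORAGE_COST_GB_MONTH = 0.01     # assumed $/GB-month
    TRANSCODE_COST_PER_VIEW = 0.002  # assumed $ of CPU per on-the-fly transcode

    monthly_storage = EXTRA_STORAGE_GB * STORAGE_COST_GB_MONTH
    breakeven = monthly_storage / TRANSCODE_COST_PER_VIEW
    print(f"break-even at {breakeven:.2f} views/month; "
          f"above that, storing the extra copy is cheaper")

With these assumptions, even a single view per month makes the pre-transcoded copy the cheaper option.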

5 points

I believe they store them, and that’s why it processes the lowest res first and works up

-5 points

It’s transcoded on the fly; this is a fairly simple Lambda function in AWS, so whatever the GCP equivalent is. You can’t upsample potato spec; the reason it looks like shit is due to bandwidth and the service determining a lower speed than is available.

2 points

It probably depends on how popular the video is anticipated to be.

I remember hearing that something like 80% of uploads to YouTube are never watched. 80% of the remaining 20% are watched only a handful of times. It’s only a tiny fraction that are popular, and the most popular are watched millions of times.

I’d guess that they don’t transcode the 80% that nobody ever watches. They definitely transcode and cache the popular 4%, but who knows what they do with the 16% in the middle that are watched a few times, but not more than 10x.
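
Those percentages compose like this (taking the remembered 80% figures at face value):

    # 80% never watched; 80% of the rest watched only a handful of times.
    never = 0.80
    rarely = (1 - never) * 0.80      # 16% of all uploads
    popular = 1 - never - rarely     # the remaining 4%
    print(f"never: {never:.0%}, a few views: {rarely:.0%}, popular: {popular:.0%}")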

1 point

Real time would mean more CPU usage every time someone plays it. If converted in advance, they only need to do it once, with the most efficient codecs.

1 point

I’m keen to know how large these source files are for YouTube compared to the 720/1080 quality ones we see on the front end. I remember them using really impressive compression, but the bitrate was super low to keep the size small.

If they’re reducing a 10-minute 1080p file from 400 MB down to 40 MB, then that’s a good gain
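
For reference, 40 MB over 10 minutes implies a very low average bitrate (quick check; the 400 MB → 40 MB figures are the guesses from the comment above):

    # Average bitrate implied by a 40 MB, 10-minute file.
    size_mb = 40
    seconds = 10 * 60
    kbps = size_mb * 8 * 1000 / seconds    # MB -> megabits -> kilobits per second
    print(f"~{kbps:.0f} kbps average")     # ~533 kbps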

14 points

Enormous servers all around the world, and over the years storage hardware keeps getting physically smaller relative to how much you can store on it
