if you could pick a standard format for a purpose what would it be and why?

e.g. flac for lossless audio because…

(yes you can add new categories)

summary:

  1. photos .jxl
  2. open domain image data .exr
  3. videos .av1
  4. lossless audio .flac
  5. lossy audio .opus
  6. subtitles srt/ass
  7. fonts .otf
  8. container mkv (doesn’t contain .jxl)
  9. plain text utf-8 (many also say markup but disagree on the implementation)
  10. documents .odt
  11. archive files (this one is causing a bloodbath so i picked randomly) .tar.zst
  12. configuration files toml
  13. typesetting typst
  14. interchange format .ora
  15. models .gltf / .glb
  16. daw session files .dawproject
  17. otdr measurement results .xml
3 points

.mom for ASCII-written Your Mom jokes.

-5 points

192 kHz for music.

The CD was the worst thing to happen in the history of audio. 44 (or 48) kHz is awful, and it is still prevalent. It would have been better to wait a few more years and get better quality.

-2 points
Removed by mod
16 points

Why? What reason could there possibly be to store frequencies as high as 96 kHz? The limit of human hearing is 20 kHz, which is why 44.1 and 48 kHz sample rates are used.

3 points

That is not what 96 kHz means. It doesn’t just mean it can store frequencies up to that point; it means there are 96,000 samples every second, so you capture more detail in the waveform.

Having said that, I’ll give anyone £1m if they can tell the difference between 48 kHz and 96 kHz. 96 kHz and 192 kHz should absolutely be used for capture but are absolutely not needed for playback.

1 point
Deleted by creator
7 points

It means it can capture any frequency up to half the sample rate, perfectly. The “extra detail” in the waveform is higher frequencies beyond the range of human hearing.

4 points

That is what it means. Any detail in the waveform that is not captured by a 48 kHz sample rate is due to frequencies that humans can’t hear.

2 points

This is a misconception about how waves are reconstructed. Each sample is a single point in time, but the sampling theorem says that if you have a bunch of discrete samples, equally spaced in time, there is one and only one continuous solution that hits those samples exactly, provided the original signal did not contain any frequencies above Nyquist (half the sampling rate). Sampling any higher than that gives you no further useful information; there is still only one solution.

tl;dr: the reconstructed signal is a continuous analog signal, not a stair-step-looking thing.
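To see that “one and only one solution” claim in action, here is a rough numpy sketch (the sample rate, test tone, and window size are made up purely for illustration): sample a tone below Nyquist, rebuild the continuous waveform with Whittaker–Shannon (sinc) interpolation, and compare it against the original.

import numpy as np

fs = 48_000                     # sample rate (Hz)
f0 = 10_000                     # test tone, well below Nyquist (fs / 2 = 24 kHz)
T = 1 / fs

# "Recorded" samples: one single point per sampling instant (0.1 s worth)
n = np.arange(4800)
x = np.sin(2 * np.pi * f0 * n * T)

# Whittaker-Shannon (sinc) interpolation: the one and only band-limited
# signal that passes through every sample. Evaluate it on a fine grid near
# the middle of the record so edge/truncation effects stay negligible.
t = np.linspace(0.049, 0.051, 2001)
x_hat = np.array([np.sum(x * np.sinc((ti - n * T) / T)) for ti in t])

print("max reconstruction error:", np.max(np.abs(x_hat - np.sin(2 * np.pi * f0 * t))))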

6 points

On top of that, 20 kHz is very much a theoretical upper limit.

Most people, be it due to aging (it affects all of us) or due to behaviour (some way more than others), can’t hear that far up anyway. Most people would be surprised how high up even e.g. 17 kHz is. It sounds a lot closer to very high-pitched “hissing” or “shimmer”, not something that’s considered “tonal”.

So yeah, saying “oh no, let me have my precious 30 kHz” really is questionable.

At least when it comes to listening to finished music files. The validity of higher sampling frequencies during various stages of the audio production process is a different, far less questionable topic.

1 point

Because if you use a 40 kHz sample rate to “draw” a 10 kHz wave, each cycle of the wave will have only four “pixels”, so all the high frequencies have very low fidelity.

1 point

As long as the audio frequency is less than half the sample rate, there is mathematically only one (exact) wave that fits all four points, so it is perfectly reconstructed. This video provides a great visualization of it: https://www.youtube.com/watch?v=cIQ9IXSUzuM

11 points

I assume you’re gonna back that up with a double blind ABX test?

10 points

44 kHz wasn’t chosen randomly. It is based on the range of frequencies that humans can hear (20 Hz to 20 kHz) and the fact that a periodic waveform can be rebuilt exactly as the original (in terms of frequency) when the sampling rate is at least twice the bandwidth. So, if it is sampled at 44 kHz you get all components up to 22 kHz, which is more than we can hear.

4 points

This is wrong. The first thing done before playing one of those files is running the audio through a low-pass filter that removes any extra frequencies 192 kHz captures, because most speakers can’t play them, and in fact they would distort the rest of the sound (due to badly recreating them, resulting in aliasing).

192 kHz has a place, and it’s called the recording studio. It’s only useful when handling intermediate products in mixing and mastering. Once that is done, only the audible portion is needed. The inaudible stuff can either be removed beforehand, saving storage space, or distributed (as 192 kHz files), and your player will remove it for you before playback.
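As a rough sketch of that last point (not something any real player literally runs; the tone frequencies below are made up), scipy’s polyphase resampler applies exactly that kind of anti-aliasing low-pass when taking a 192 kHz file down to 48 kHz: the audible content survives and the ultrasonic content is dropped.

import numpy as np
from scipy.signal import resample_poly

fs_master, fs_playback = 192_000, 48_000

# Fake one-second "master": an audible 1 kHz tone plus an inaudible 60 kHz tone
t = np.arange(fs_master) / fs_master
master = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 60_000 * t)

# Polyphase resampling low-pass filters (anti-aliasing) and then decimates:
# everything above the new Nyquist (24 kHz) is removed, the 1 kHz tone stays.
playback = resample_poly(master, up=1, down=fs_master // fs_playback)

print(len(master), "samples ->", len(playback), "samples")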

-10 points

.exe to .sh would low-key turn all Windows machines into Linux machines.

6 points

I’m not getting what you are trying to say

43 points

You’re comparing compiled executables to scripts; it’s apples and oranges.

-1 points

I, for one, label my apple crates as oranges.

winebin="wine"
if file "$1" | grep 64-bit; then
    winebin="wine64"
fi

printf '%s %q $@ || exit $?' "$winebin" "$1" > "$1.sh"
chmod +x "$1.sh"
27 points

I’d set up a working group to invent something new. Many of our current formats are stuck in the past; e.g. PDF or ODF are still emulating paper, even though everybody keeps reading them on a screen. What I want to see is a standard document format that is built for the modern-day Internet, with editing and publishing in mind. HTML ain’t it, as it can’t handle editing well or long-form documents; EPUB isn’t supported by browsers; Markdown lacks a lot of features; etc. And then you have things like Google Docs, which are Internet-aware, editable, and shareable, but also completely proprietary and lock you into the Google ecosystem.

14 points

Epub isn’t supported by browsers

So you want EPUB support in browsers, and then you have the ultimate document file format?

13 points

It would solve the long-form document problem. It wouldn’t help with the editing, however. The problem with HTML as it is today is that it has long left its document-markup roots and turned into an app development platform, making it not really suitable for plain old documents. You’d need to cut it down to a subset of features that are necessary for documents (e.g. no JavaScript), similar to how PDF/A removes features from PDF to create a more reliable and future-proof format.

4 points

WeasyPrint kinda is that, except that it’s meant to be rendered to PDF.

1 point

Can you explain why you need browser support for epub?

7 points

EPUBs are just websites bound in XHTML or something. Couldn’t we just make every browser also an EPUB reader? (I just like EPUBs.)

8 points

They’re basically zip files with a standardized metadata file to determine chapter order, index page, … and every chapter is an HTML file.
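They’re easy to poke at, too. Here is a quick Python sketch (book.epub is a made-up filename) that treats an EPUB as the zip file it is, prints the standard META-INF/container.xml entry that points at the package document, and lists the chapter files:

import zipfile

# An EPUB is a zip archive with a conventional layout
with zipfile.ZipFile("book.epub") as epub:
    # Every EPUB contains this file; it points to the .opf package document,
    # which in turn lists the chapters and their reading order.
    print(epub.read("META-INF/container.xml").decode("utf-8"))

    # The chapters themselves are ordinary (X)HTML files inside the zip
    for name in epub.namelist():
        if name.endswith((".xhtml", ".html")):
            print(name)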

2 points

That’s the idea, and while we’re at it, we could also make .zip files a proper Web technology with browser support. At the moment ePub exists in this weird twilight where it is built out of mostly Web technology, yet isn’t actually part of the Web. Everything being packed into .zip files also means that you can’t link directly to the individual pages within an ePub, as HTTP doesn’t know how to unpack them. It’s all weird and messy, and it’s surprising that nobody has cleaned it all up and integrated it into the Web properly.

So far the original Microsoft Edge is the only browser I am aware of with native ePub support, but even that didn’t survive when they switched to Chrome’s Blink.

2 points

Microsoft Edge’s ePub reader was so good! I would have used it all the time for reading if it hadn’t met its demise. Is there no equivalent fork or project out there? The existing ePub readers always have these quirks that annoy me to the point where I’ll just use Calibre’s built-in reader, which works well enough.

89 points

zip or 7z for compressed archives. I hate that RAR has, for some reason, become the de facto standard for piracy. It’s just so bad.

The other day I saw a tar.gz containing a multi-part RAR, which contained an ISO, which contained a compressed bin file with an exe to decompress it. Soooo unnecessary.

Edit: And the decompressed game of course has all of its compressed assets in renamed zip files.

35 points

It was originally rar because it’s so easy to separate into multiple files. Now you can do that in other formats, but the legacy has stuck.

11 points

Not just that. RAR also has recovery records.

19 points

.tar.zstd all the way IMO. I’ve almost entirely switched to archiving with zstd; it’s a fantastic format.
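For the curious, here is a minimal sketch of that workflow from Python, assuming the third-party zstandard package and a made-up myproject/ directory to archive (a recent GNU tar does the same from the shell with tar --zstd -cf myproject.tar.zst myproject/):

import tarfile
import zstandard  # third-party: pip install zstandard

# Stream a tar archive straight through the zstd compressor
with open("myproject.tar.zst", "wb") as out:
    cctx = zstandard.ZstdCompressor(level=19)
    with cctx.stream_writer(out) as compressed:
        with tarfile.open(fileobj=compressed, mode="w|") as tar:
            tar.add("myproject")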

3 points

The only annoying thing is that the extension for zstd compression is .zst (no d). Tar does not recognize a .zstd extension; only .zst is automatically recognized and decompressed. Come on!

2 points

If we’re being entirely honest, just about everything in the zstd ecosystem needs some basic UX love. Working with .tar.zst files in any GUI is an exercise in frustration as well.

I think they recently implemented support for chunked decoding, so reading files inside a zstd archive (like, say, seeking to read inside tar files) should start to improve sooner or later, but some of the niceties we expect from compressed archives aren’t entirely there yet.

Fantastic compression though!

5 points

why not gzip?

18 points

Gzip is slower and produces larger output. Zstandard, on the other hand, is far faster than any of the existing standards in terms of compression speed; this is its killer feature. Also, it provides a somewhat better compression ratio than gzip [citation needed].

5 points

gzip is very slow compared to zstd for similar levels of compression.

The zstd algorithm is a project by the same author as lz4. lz4 was designed for decompression speed; zstd was designed to balance resource utilization, speed, and compression ratio, and it does a fantastic job of it.
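If you want to sanity-check that on your own data, here is a rough one-shot benchmark sketch (again assuming the third-party zstandard package; the input path is made up):

import time
import zlib
import zstandard  # third-party: pip install zstandard

with open("big_input.bin", "rb") as fh:  # any reasonably large file
    data = fh.read()

for name, compress in [
    ("gzip/zlib level 6", lambda d: zlib.compress(d, 6)),
    ("zstd level 3", zstandard.ZstdCompressor(level=3).compress),
]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(out) / len(data):.3f} of original size in {elapsed:.2f} s")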

51 points

A .tarducken, if you will.

10 points

Ziptarar?

4 points

.tar.xz masterrace

2 points

This comment didn’t age well.

