ZickZack
ZickZack@kbin.social
0 posts • 39 comments

It really depends on what you want: I really like Obsidian, which is cross-platform and uses basically vanilla Markdown, making it easy to switch should this project ever go down in flames (there are also plugins that add extra syntax, which may not be portable, but that's to be expected).

There's also Logseq, which has much more bespoke syntax (major extensions to Markdown), but it is also open source, meaning there's no real danger of it suddenly vanishing from one day to the next.
Specifically, Logseq is much heavier than Obsidian, both in the app itself and in the features it adds to Markdown, while Obsidian is much more "Markdown++", with a significant part of the "++" coming from plugins.

In my experience, Logseq is really nice for short-term note-taking (e.g. lists, reminders, etc.), while Obsidian is much nicer for long-term notes.

Some people also like Notion, but I never got into it: it requires much more structure ahead of time and is very locked down (it also obviously isn't self-hosted). I can see Notion being really nice for people who want less general note-taking and more custom "forms" to fill out (e.g. travel checklists, production planning, etc.).

Personally, I would always go with Obsidian, just for the peace of mind that the Markdown plays well with other Markdown editors, which is important to me if I want a long-running knowledge base.
Unfortunately, I can't tell you anything about collaboration, since I don't use that feature in any note-taking system.


And don't forget that even after that you still have to watch baked-in "This video is sponsored by <insert shady company here>" ads, since the actual revenue that YouTube passes on to creators is so low that, to keep the ship afloat, they have to look for additional revenue streams.


They chose to do this. Daedalic has historically been a point-and-click developer, but they wanted to diversify, especially since their previous title "The Pillars of the Earth" flopped. They first tried their hand at RTS with "A Year of Rain", which is simply not that good, and then looked into Gollum.
You also can't really make the argument that the project was rushed out the door, considering the game was originally supposed to release in 2021 (two years ago).

They tried something they had no experience in, not through coercion but because they wanted to, and produced a game of shockingly low quality. Since this wasn't their first flop, just the latest in a long series of flops (though the most expensive and high-profile one), the studio closed.


The car is the same as last week.
You have to remember that this is a track Verstappen really doesn't like: last year's race in Singapore was also his worst.
Verstappen usually drives ~3 tenths faster than Perez, which, had he done that this week, would also put him up there…

IMO this is less a case of the car being worse and more of Verstappen not being able to get 100% out of it.


It's a different paper (e.g. https://www.nature.com/articles/s41586-022-05294-9) from a different researcher (specifically Ranga Dias). It is not connected to the recent non-peer-reviewed https://arxiv.org/abs/2307.12008.


I think you also have to keep in mind the position that de Vries and Red Bull are in:

  • Red Bull is looking for a second Verstappen-level driver. That's always been the case, not only for Red Bull but for all tier-1 teams: their aspirations are championships, not points or even podiums.
  • De Vries is a 28-year-old rookie. That's usually the age at which drivers retire, or lean on their superior experience to make up for their loss in reaction speed and overall pace. The problem is that de Vries has no experience while being older than Verstappen by close to three years. The fact that he got to race at all is a miracle: he would have to beat Tsunoda every week by quite a margin to become relevant for Red Bull. And if he doesn't become relevant for Red Bull, then why have him at AlphaTauri?

Meanwhile, they have a young driver in the form of Tsunoda, who exists in a limbo due to having nothing to compare against: he could be the fastest driver on the planet in a trash car, or he could be underdelivering without anyone noticing, due to the lack of comparison.
This is bad for two reasons:

  1. you don't know whether Tsunoda is an option for Red Bull
  2. you have no idea how good AlphaTauri is overall, which is doubly bad considering they want to make major changes to how AlphaTauri operates.

On the other hand, you have a perfectly good Ricciardo sitting on his hands who performed really well at Silverstone. Realistically, you aren't going to lose anything from having Ricciardo drive the rest of the season instead of de Vries, but you have the potential upside of more context on the quality of Tsunoda and the team, which you wouldn't get otherwise.

In general, I'm more surprised that they ever gave de Vries a chance, considering his age and the context of his big achievements:
In Formula 2 his stiffest competitor was Nicholas Latifi (he won with 266 points vs. Latifi's 214) in what can be described as a dud year, after the majority of the now-F1 mainstays had already graduated (he also needed three years to win F2, which is never a good sign).
If you have ever seen a Formula E race, you will notice that it is quite a chaotic crash-fest with very weird rules and other nonsense. Just not crashing and not driving too quickly can get you really far, by surviving the carbon-fiber mayhem and the fuel-conservation issues.
To put it into perspective, here are his race results in the year de Vries won Formula E: [1st, 9th, retired, retired, 1st, 16th, retired, 9th, retired, 13th, 18th, 2nd, 2nd, 22nd, 8th]. In short, even if we ignore all DNFs, we get a mean finishing position of about 9th!

In short, there's a reason why Mercedes never even tried to get him an F1 spot: he's not a bad driver, but being "not a bad driver" is insufficient for top teams like Mercedes and Red Bull. There's little incentive to put him in any car, even less so nowadays considering his age.


That's not what lossless data compression schemes do:
In lossless compression, the general idea is to build a codebook of commonly occurring patterns and use those as shorthand.
For example, one of the simplest (and now ancient) algorithms, LZW, does the following:

  • Initialize the dictionary to contain all strings of length one.
  • Find the longest string W in the dictionary that matches the current input.
  • Emit the dictionary index for W to the output and remove W from the input.
  • Add W followed by the next symbol in the input to the dictionary.
  • Repeat from the second step.

Basically, instead of rewriting long sequences, it just writes down the index into an existing dictionary of already-seen sequences.
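
Here's a minimal sketch of that loop in Python (a toy version assuming single-byte characters; real implementations also bound the dictionary size and pack the indices into bits):

def lzw_compress(data: str) -> list[int]:
    # Start with a dictionary containing all strings of length one.
    dictionary = {chr(i): i for i in range(256)}
    w = ""
    output = []
    for c in data:
        if w + c in dictionary:
            w = w + c                            # keep growing the longest match W
        else:
            output.append(dictionary[w])         # emit the index for W
            dictionary[w + c] = len(dictionary)  # add W + next symbol
            w = c                                # restart matching from c
    if w:
        output.append(dictionary[w])
    return output

print(lzw_compress("abababab"))  # -> [97, 98, 256, 258, 98]

Note how eight input characters collapse into five indices: the repeated "ab"/"aba" patterns get reused from the dictionary instead of being written out again.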

However, once this is done, you now need to find an encoding that takes your character set (the original characters + the new dictionary references) and turns it into bits.
It turns out that we can do this optimally: using an algorithm called arithmetic coding, we can align the length of a bitstring with the amount of information it contains.
"Information" here means the statistical concept of information, which depends on the inverse of the likelihood that a certain character is observed.
Logically this makes sense:
Let's say you have a system that measures earthquakes. As one would expect, most of the time (say 99% of the time) it will report "no earthquake", while in 1% of cases it will report "earthquake".
Since "no earthquake" is a lot more common, the information gain is relatively small (if I told you "the system said no earthquake", you could have guessed that with 99% confidence: not very surprising).
However, if I tell you "there is an earthquake", this is much more important and therefore carries more information.

From information theory (a branch of mathematics), we know that if we want to maximize the efficiency of our codec, we have to match the length of each character's code to its information content. Arithmetic coding gives us a general way of doing this.
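
To make the earthquake example concrete: the information content of an event with probability p is -log2(p) bits, which is the code length an ideal coder aims for. A full arithmetic coder is too long to sketch here, so this toy snippet only computes those target lengths:

import math

def information_bits(p: float) -> float:
    # Shannon information content of an event with probability p.
    return -math.log2(p)

print(information_bits(0.99))  # "no earthquake": ~0.014 bits
print(information_bits(0.01))  # "earthquake":    ~6.6 bits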

However, we can do even better:
Instead of just considering individual characters, we can also add in character pairs!
Of course, it doesn't make sense to add in every possible character pair, but for some of them it makes a ton of sense:
For example, if we want to compress English text, we could give a separate codebook entry to the entire sequence "the" and save a ton of bits!
To do this for pairs of characters in the English alphabet, we have to consider 26*26=676 combinations.
We can still do that: just scan the text ~676 times.
With 3-character combinations it becomes a lot harder: 26*26*26=17576 combinations.
But with 4 characters it's impossible: you already have nearly half a million combinations!
In reality, this is even worse, since you have way more than 26 characters: you have things like ", . ? ! and your codebook IDs, which blow up the size even more!

So, how are we supposed to figure out which character combinations to merge and how many bits to give them?
We can try to predict them!
This technique, called PPM (prediction by partial matching), is already very old (~1980s) but is still used in many compression algorithms.
The important trick is that, with deep learning, we can now train even more efficient estimators without losing the lossless property:
Remember, we only predict which things we want to combine and how many bits we want to assign to them!
The worst-case scenario is that your compression gets worse because the model predicts nonsensical character combinations to store, but that never changes the actual information you store, just how close you get to the optimal compression.
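
To give a flavor of the counting approach (this is not real PPM, which also handles unseen symbols via escape codes, and certainly not a learned model), here is a toy order-1 predictor that estimates how many bits a character should cost given the character right before it:

from collections import Counter, defaultdict
import math

def order1_counts(text: str):
    # For each context character, count which characters follow it.
    counts = defaultdict(Counter)
    for prev, cur in zip(text, text[1:]):
        counts[prev][cur] += 1
    return counts

def ideal_bits(counts, prev: str, cur: str) -> float:
    # Ideal code length for `cur` in context `prev`: -log2(probability).
    total = sum(counts[prev].values())
    return -math.log2(counts[prev][cur] / total)

model = order1_counts("to the top of the tower")
print(ideal_bits(model, "t", "h"))  # 'h' after 't' costs ~1.3 bits in this sample

A deep-learning compressor replaces these raw counts with a model that predicts much sharper probabilities, which directly translates into shorter codes.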

The state of the art in text compression has used this for a long time (see the Hutter Prize); it's just now getting to a stage where systems become fast and accurate enough to also make the compression useful for other domains/general-purpose compression.


They will make it open source, just tremendously complicated and expensive to comply with.
In general, if you see a group proposing regulations, it's usually to cement their own position: e.g. OpenAI is a frontrunner in ML for the masses, but doesn't really have a technical edge over anyone else, so they run to Congress with "please regulate us".
Regulatory compliance is always expensive and difficult, which means it favors the people who already have money and systems running right now.

There are so many ways this can be broken, intentionally or unintentionally. It's also a great way to identify, say, government critics and shut them down (e.g. if you are Chinese and everything is uniquely tagged to you: would you write about Tiananmen Square?), or to get monopolies on (dis)information.
This is not literally forcing everyone to get a license to produce creative or factual work, but it's very close, since you can easily discriminate against any creative or factual sources you find unwanted.

In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will have catastrophic consequences.


Everything using the ActivityPub standard has open likes (see https://www.w3.org/TR/2018/REC-activitypub-20180123/ for the standard), and logically it makes sense to do this, to allow for verification of "likes":
If you didn't do that, a malicious instance could much more easily shove a bunch of likes onto another instance's post, whereas if you have "like authors", it's much easier to moderate likes.
Effectively, ActivityPub treats all interactions like comments, where you have a "from" and a "to" field, just like email does (imagine you could send messages without an originator: email would have unusable levels of spam and harassment).
Specifically, here is an example of a simple activity:

POST /outbox/ HTTP/1.1
Host: dustycloud.org
Authorization: Bearer XXXXXXXXXXX
Content-Type: application/ld+json; profile="https://www.w3.org/ns/activitystreams"

{
  "@context": ["https://www.w3.org/ns/activitystreams",
               {"@language": "en"}],
  "type": "Like",
  "actor": "https://dustycloud.org/chris/",
  "name": "Chris liked 'Minimal ActivityPub update client'",
  "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub",
  "to": ["https://rhiaro.co.uk/#amy",
         "https://dustycloud.org/followers",
         "https://rhiaro.co.uk/followers/"],
  "cc": "https://e14n.com/evan"
}

As you can see, this has a very email-like structure with a sender, a receiver, and content. The difference is mostly that you also publish a "type" that allows for more complex interactions (e.g. if the type is a comment, Lemmy knows to put it into the comments; if the type is a like, it knows to put it with the likes, etc.).
The actual protocol is a little more complex, but if you replace "ActivityPub" with "typed email" you are correct 99% of the time.
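
As a purely hypothetical sketch (not Lemmy's or kbin's actual code), the routing a receiving server does on that "type" field could look something like this:

def handle_like(activity: dict) -> None:
    # Attribute the like to its author, so it can be verified and moderated.
    print(f'{activity["actor"]} liked {activity["object"]}')

def handle_note(activity: dict) -> None:
    # "Note" is the type that comments and posts typically arrive as.
    print(f'new comment from {activity["actor"]}')

HANDLERS = {"Like": handle_like, "Note": handle_note}

def dispatch(activity: dict) -> None:
    handler = HANDLERS.get(activity["type"])
    if handler:  # unknown types can simply be ignored
        handler(activity)

dispatch({"type": "Like",
          "actor": "https://dustycloud.org/chris/",
          "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub"})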

The different services, like Lemmy, kbin, Mastodon, or PeerTube, are just specific instantiations of this standard. E.g. a "like" might have slightly different effects on different services (hence also the confusion between "boosting" and "liking" on kbin).


Go to the relevant domain's front page (e.g. https://kbin.social/d/kbin.social for kbin.social).
The URL scheme is "https://kbin.social/d/DOMAINHERE", assuming you are currently on kbin.social.
On the right, in the sidebar, you can see "Domain" and, below that, options to subscribe or block.
It's really the same thing as magazines, just that you generally don't visit the domain itself.
