ZickZack (ZickZack@fedia.io)
0 posts • 19 comments

Exactly: it’s “open source” like Android is. The core of Android (“Android Open Source Project”) is open source, in many cases because it has to be (e.g. the Linux kernel they use), but that includes practically nothing that makes the actual system work for normal users. If you want the system to be practically usable, you need a lot more, which is usually the proprietary “Google Mobile Services”. You are also generally required to install all items in GMS as a bundle: even if you only need the Play Store, you still have to install Google Chrome.

Further, the Android name and logo are trademarked by Google, so even if you roll your own Android, you are not allowed to call it Android. WearOS is essentially the same thing: the Android subsystem is open, but the actual thing you call WearOS (plus trademarks, etc.) is not.


Here is the more burning question: what is worse?

Case “it was not made to design standards”: then Boeing might have a problem in their manufacturing processes, which would have ramifications for the entire fleet. That would be bad, but fixable.

Case “it was made to design standards”: in that case you only have a problem with this one type of jet, but it is a problem in the fundamental design, which might ground the entire fleet (again).


And that would be completely legal, just like any random guy on DeviantArt can draw something in the style of, e.g., Picasso without getting into trouble (unless, of course, they claim it was painted by Picasso, but that should be obvious).


> train one with all the Nintendo leaks

This is fine

> generate some Zelda art and a new Mario title

This is copyright infringement.

The ruling in Japan (and, I predict, in other countries as well) is that a trained model (which is just a statistical estimator) is not copyrightable, so the act of training cannot be copyright infringement. This is already standard practice for everything else: you cannot copyright a mathematical function, regardless of how much data you used to fit it (which is sensible: CERN has fit physics models to petabytes worth of data, but that doesn’t mean they hold a copyright on the laws of nature; they only hold the copyright on the data itself). However, if you generate something that is copyrighted, that item is still copyrighted: it doesn’t matter whether you used an AI image generator, Photoshop, or a tattoo gun.
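As a minimal sketch of that “statistical estimator” point (with made-up data): fitting condenses arbitrarily much data into a few parameters of a function, and those parameters are not a copy of the data.

```python
# Minimal sketch of "a trained model is just a fitted function":
# least-squares fitting condenses a large dataset into a few parameters.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100_000)            # a large pile of observations
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=x.shape)

# The fit: two numbers (slope, intercept), however large the dataset was.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"fitted function: y = {slope:.3f} * x + {intercept:.3f}")
```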


> First, I don’t think that’s the right comparison. You need to compare them to taxis.

It’s not just that: you generally have a significant distribution shift when comparing self-driving systems/driving assistants to normal humans. This is because people only use self-driving in situations where it has a chance of working, which is especially true with something like Tesla’s self-driving, where ultimately people will not even engage the autopilot when it gets tricky (never mind intervening dynamically: they won’t start it in the first place!)

For instance, one of the most common confounding factors is the ratio of highway to non-highway driving: highways are inherently less accident-prone since you don’t have to deal with intersections, oncoming traffic, people merging in from every random driveway, or children chasing a ball into the street. Self-driving systems log a much larger share of highway miles than ordinary drivers, because the availability of the technology dictates where you end up measuring. You can correct for that by, e.g., explicitly computing the conditional likelihood p(accident|highway) for each group and then marginalizing with a common p(highway) derived from the entire population of car traffic.
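A hedged sketch of that correction, with entirely made-up numbers: estimate each group’s accident rate conditional on road type, then recombine both groups under the same population-level p(highway) instead of each group’s self-selected highway share.

```python
# Reweighting accident rates to a common highway share.
# All rates below are hypothetical, purely for illustration.

p_acc_given_hwy_sd   = 1e-7   # self-driving, highway
p_acc_given_city_sd  = 8e-7   # self-driving, non-highway
p_acc_given_hwy_hum  = 2e-7   # human, highway
p_acc_given_city_hum = 6e-7   # human, non-highway

# Common mixing weight taken from the whole population of car traffic,
# rather than each group's own (self-selected) highway share:
p_hwy = 0.3

def marginal_accident_rate(p_acc_hwy, p_acc_city, p_highway):
    """Law of total probability:
    p(accident) = p(acc|hwy) * p(hwy) + p(acc|city) * (1 - p(hwy))."""
    return p_acc_hwy * p_highway + p_acc_city * (1.0 - p_highway)

print("self-driving:", marginal_accident_rate(p_acc_given_hwy_sd, p_acc_given_city_sd, p_hwy))
print("human:       ", marginal_accident_rate(p_acc_given_hwy_hum, p_acc_given_city_hum, p_hwy))
```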


Not necessarily: there have been recent works indicating that filtering with fine-tuned LLMs greatly improves data efficiency (e.g. phi-1). Further, if you add, e.g., human selection on top of LLM-generated content, you can get great results, since the LLM generation can act as a soft curriculum, with the human selection biasing towards higher quality.
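A minimal sketch of the filtering step, in the spirit of phi-1’s “textbook quality” filtering: rank candidate documents with some model-based quality scorer and keep only the top fraction. The `quality_score` function below is a hypothetical placeholder, not the actual phi-1 classifier; in practice it would be a fine-tuned LLM or a classifier trained on LLM annotations.

```python
def quality_score(text: str) -> float:
    """Hypothetical stand-in: higher = more 'textbook-like'.
    Here just the ratio of unique to total words, as a dummy heuristic."""
    words = text.split()
    return len(set(words)) / max(len(words), 1)

def filter_corpus(docs: list[str], keep_fraction: float = 0.2) -> list[str]:
    """Keep only the top-scoring fraction of candidate training documents."""
    ranked = sorted(docs, key=quality_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

corpus = [
    "the the the the the",
    "A binary heap stores the minimum at the root.",
    "click here click here buy now",
    "Gradient descent updates parameters against the gradient.",
]
print(filter_corpus(corpus, keep_fraction=0.5))
```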


Honestly, I recommend that everyone without existing Linux experience use Fedora: it’s reasonably modern (nice for, e.g., gaming), while also not being a full rolling-release model like Arch (which needs expertise to fix in case something breaks). It’s also reasonably popular, meaning you will find enough guidance in case something does break.


Basically, the stuff they need in order to detect whether ads are actually shown requires information about the device state that is generally not accessible under Article 5(3) ePR.


The problem is that the model is actually doing exactly what it’s supposed to do, it’s just not what OpenAI wants it to do. The reason the prompt-extraction method works is that the underlying statistical model gets shifted far outside the domain of “real” language. In that regime, the posterior-maximizing sample collapses to a sample from the prior (here, that would be a sample from the dataset; this is combined with things like repetition penalties).

This is the correct way for a statistical estimator to work, just not the way you want it to work. That’s also why they can’t really fix it: there’s nothing broken to begin with (and “unbreaking” it would almost surely blow something else up).
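A toy illustration of that collapse, with a bigram model standing in for the LLM (my own simplification, not how OpenAI’s models actually work): once the context is a token the model has never seen, all it can do is fall back to the marginal distribution over its training tokens, i.e. start reproducing training data.

```python
# Out-of-domain context => the conditional degenerates to the prior,
# and the model wanders through its training data.
from collections import Counter, defaultdict
import random

training_text = "the secret ingredient is love and the secret ingredient is time".split()

unigrams = Counter(training_text)          # the "prior" over training tokens
bigrams = defaultdict(Counter)             # the conditional model
for a, b in zip(training_text, training_text[1:]):
    bigrams[a][b] += 1

def next_token(context: str) -> str:
    dist = bigrams.get(context)
    if not dist:                           # unseen context: back off to the prior
        dist = unigrams
    return random.choices(list(dist), weights=dist.values())[0]

random.seed(1)
out = ["poem"]                             # a token the model has never seen
for _ in range(8):
    out.append(next_token(out[-1]))
print(" ".join(out))                       # after "poem", it replays training text
```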


Could be none of them, and the complaint comes from one of the academy teams.
