SmoothIsFast

Lmao alright bud, go fire all your employees and see how you do. Then you will understand who needs to be loyal to whom.

That’s fucked up.

Oh no, educated workers who don’t want to be taken advantage of and know their worth. Maybe companies should value their employees if they want company loyalty.

The trajectory was chosen by NASA because the Orion capsule on top of the SLS rocket does not have enough efficiency to be in a low regular lunar orbit while landing and bringing back astronauts. This trajectory has nothing to do with SpaceX.

Nor did I say it did. I said some brain-dead idiots sent the contract off to a company that designed a craft incapable of doing what we have done previously. Congrats, Lockheed, for fucking up our next moon program. It’s you who equated that to SpaceX lmaoo

As for comparing the one rocket that landed on the moon to the 15 launches (thank you for writing launches and not rockets, as Destin Sandlin wrongly did): the reason is that the mass delivered to the surface is gigantic compared to Apollo. Why? Because we do not want to say “we did it!” We want to say “we live there!”

I mean, it really doesn’t matter: are you going to have astronauts just chilling in orbit for like a year waiting on those launches, racking up radiation? Saying the reason we need 15 launches for Starship is specifically due to mass is such a cop-out. It’s due to how limited the amount of fuel we can send up to refuel in orbit is, which is fucking stupid at our current level of space infrastructure. We still haven’t even tested it; what, we need another 4 decades for this terrible plan to come to fruition? Take note of what the Apollo engineers said about stepping stones in development: take too big a leap and you won’t be able to adequately evaluate what went wrong if something does; take too small a step and you’ll never reach the goal. We decided to take massive leaps with no forethought about their efficiency.
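
For reference, here’s the back-of-the-envelope version of where a launch count like that comes from. The propellant load, tanker payload, and extra launches below are my assumptions pulled from publicly floated figures, not official specs:

    import math

    # Rough sketch of where a ~15-launch figure comes from.
    # All numbers are assumptions from publicly floated figures,
    # not official SpaceX specs.
    PROP_NEEDED_T = 1200    # assumed full propellant load of the HLS Starship, tonnes
    TANKER_PAYLOAD_T = 100  # assumed propellant delivered per tanker flight, tonnes
    OTHER_LAUNCHES = 2      # the depot launch plus the HLS lander itself

    tanker_flights = math.ceil(PROP_NEEDED_T / TANKER_PAYLOAD_T)  # 12
    total_launches = tanker_flights + OTHER_LAUNCHES              # 14
    print(f"{tanker_flights} tanker flights, ~{total_launches} launches total")

Shave the tanker payload or add boil-off losses and you blow past 15 in a hurry, which is exactly the infrastructure problem.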

Can people stop saying SpaceX rockets explode? They do not.

No, that is precisely what occurred with Starship. You can see the shockwave from the explosion, which means the oxidizer mixed with the propellant before exploding during the flip phase; that’s a major fucking failure. It was not a rupture like previous issues, nor was it terminated, it fucking exploded lmao. The worst part: all that lovely telemetry that’s gonna help them out gave zero indication of said catastrophic failure, so that’s gonna be such great info for them, right? Just like the first test that failed, when they knew the pad wouldn’t be strong enough and it caused damage to the rocket, meaning they got no actionable data?

As of now, and still evolving, for Starship:
$7B cost, $4B from NASA for the first 2 missions
11 years for the first tests, still no rocket
Can bring 220,000 lb and 35,000 ft³ to the moon
And they still end up with a rocket NASA can continue to use at a very low price (less than 25% of SLS per mission)

Starship is not a proven concept and is still actively in development; these numbers mean nothing right now. There are massive issues looming and 90% of what’s needed hasn’t even been tested yet, but go ahead, keep riding daddy Musk as if he isn’t killing good ideas with lofty moving goalposts and a complete lack of understanding of what’s being developed.

Your description is how pre-LLM chatbots work.

Not really, we just parallelized the computing and used other models to filter our training data and tokenize it. Sure, the loop looks more complex because of the parallelization and because the words used as inputs and selections are tokenized, but it doesn’t change what the underlying principles are here.
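
To spell out what I mean by the loop: strip away the scale and generation is still tokenize, predict a distribution, sample, append. A toy sketch, with probs_for as a uniform placeholder for the real model:

    import random

    # Toy generation loop: predict a distribution, sample, append, repeat.
    # probs_for is a uniform placeholder for the trained model; in an LLM it's
    # a massive parallelized network, but the loop around it is the same.
    vocab = ["hello", "world", "how", "are", "you", "<eos>"]

    def probs_for(tokens):
        # placeholder "model": uniform distribution over the vocabulary
        return {tok: 1.0 / len(vocab) for tok in vocab}

    def generate(prompt, max_len=10):
        tokens = list(prompt)
        while len(tokens) < max_len:
            dist = probs_for(tokens)                                          # predict
            nxt = random.choices(list(dist), weights=list(dist.values()))[0]  # sample
            if nxt == "<eos>":
                break                                                         # stop token
            tokens.append(nxt)                                                # append, go again
        return tokens

    print(generate(["hello"]))

Swap the placeholder for a trillion-parameter network and the loop around it doesn’t change.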

Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.

Yes, they need proper interaction, or, you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it’s creating a new system to follow the old one, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not gonna somehow make something sentient or aware. For that to happen it needs to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning there is zero feedback or interaction to create emergent properties in this system.
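
To make the determinism point concrete, here’s a toy sketch with a fixed, made-up weight table and greedy selection; note that nothing in the loop ever writes back to the weights:

    # Fixed, made-up weight table; greedy selection. Run it as many times as
    # you like: same input, same output, and nothing ever modifies WEIGHTS.
    WEIGHTS = {("I", "am"): 0.9, ("I", "think"): 0.1,
               ("am", "here"): 0.7, ("am", "fine"): 0.3}

    def greedy_next(token):
        options = {nxt: w for (cur, nxt), w in WEIGHTS.items() if cur == token}
        return max(options, key=options.get) if options else None

    def generate(token, steps=3):
        out = [token]
        for _ in range(steps):
            token = greedy_next(token)
            if token is None:
                break
            out.append(token)
        return out

    assert generate("I") == generate("I")  # deterministic, zero feedback
    print(generate("I"))                   # -> ['I', 'am', 'here']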

Emergent properties are literally the only reason LLMs work at all.

No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which means they look more intelligible. That’s it. Garbage in, garbage out still applies, and making it larger does not mean that this garbage is gonna magically create new control loops in your code. It might increase precision, since you have more options to compare and weigh against, but it does not change the underlying system.

I’m just gonna leave this here, since you want to buy into all the bullshit surrounding Starship lmao

https://www.youtube.com/watch?v=K5GevpAGDWE

No, the queue will now add popular playlists after what you were listening to when you restart the app, if your previous queue was a generated one. Not sure of the exact steps to cause it, but it seems like: you were listening to a daily playlist, you close the app, the next day the playlist has updated, and instead of pointing to the new daily it decides to point to one of the popular playlists for your next songs in queue. It doesn’t stop the song you paused on; it just adds new shit to the queue after it once it loses track of where to point. Seems like they should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.
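
Pure speculation, but the fallback would look something like this; every name in the sketch is made up, it’s just the shape of the suspected bug, not Spotify’s actual code:

    # Speculative reconstruction of the suspected bug. Every name here is
    # hypothetical; this is the shape of the fallback, not Spotify's code.
    PLAYLISTS = {"daily-mix-old": None}       # regenerated overnight, ID now dead
    POPULAR = [["pop hit 1", "pop hit 2"]]    # stand-in "popular" playlists
    LIKED_SONGS = ["liked 1", "liked 2", "liked 3"]

    def refill_queue(source_id):
        source = PLAYLISTS.get(source_id)
        if source is None:
            # Suspected behavior: the stale daily-playlist ID no longer
            # resolves, so the app falls back to a popular playlist...
            return POPULAR[0]
            # ...when arguably it should just shuffle LIKED_SONGS instead.
        return source

    print(refill_queue("daily-mix-old"))  # -> pop hits, not your own music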

You have no idea what you are talking about. When they train a model they have two sets: one that fine-tunes it and another that evaluates it. You never have the training data in the evaluation set or vice versa.

That’s not what I said at all. I said, as the paper stated, that the model encodes trueness into its internal weights during training, and this was demonstrated to be more effective when data sets with a more equal distribution of true and false data points were used during training. If they used one-sided training data, the effect was significantly biased. That’s all the paper is describing.
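
A toy illustration of why the balance matters; the “probe” here is deliberately dumb (just the base rate of the training labels), standing in for a learned truth direction, and the paper’s actual setup is obviously more involved:

    # Toy illustration of why label balance matters. The "probe" is just the
    # base rate of the training labels, a dumb stand-in for a learned truth
    # direction in weight space.
    def fit_probe(labels):
        return sum(labels) / len(labels)   # fraction of examples labeled true

    balanced  = [True, False] * 50             # 50/50 true/false statements
    one_sided = [True] * 95 + [False] * 5      # heavily skewed toward true

    print(fit_probe(balanced))    # 0.5  -> no prior dragging predictions around
    print(fit_probe(one_sided))   # 0.95 -> biased toward calling everything true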
